Kuhn – Lecture 5

November 8, 2015

Science does not begin with facts and then construct theories out of them. Nor does science begin with theories and then simply find facts that would confirm them. Both conceptions treat science as though it were a discourse that is completely context-free. In the first case, facts are simply available, as though they were waiting for interpretation of a specific kind; in the second, theories are simply open to facts, as though there were no inertia or hindrance to the smooth progress of science from one theory to the next, each equally open to the possibility of falsification.

One of the first philosophers of science in the Anglo-American tradition to take the idea of a context or background to scientific activity seriously was Thomas Kuhn.[1] Loosely characterised, this approach might be called ‘historical’. What does it mean to treat science as though it were part of history rather than outside of it? It means first of all to take scientists seriously: to treat what they do the same way that we would analyse the thoughts and actions of French peasants of the 13th century or a military general in the 20th. It means, first, recording scientific achievements correctly (who thought of what, and when), and secondly examining exactly how scientists came up with their theories in relation to the material they were investigating. What it certainly is not is the importation of philosophical theories from the outside (like verification or falsification), followed by squeezing scientific activity to see whether it fits these ideal models.

However much the logical positivists and Popper might differ, they both have the same idealised view of science: there is a sharp difference between theory and observation; knowledge is cumulative, tending towards a true understanding of the universe; the language of science is precise and science is unified; science is either inductive or deductive; the key question of the philosophy of science is legitimacy and validity, rather than the contingency of discovery. Against all these suppositions Kuhn puts forward exactly the opposite: there is no sharp difference between observation and theory; knowledge is discontinuous; science is neither a tight deductive structure nor an inductive reading of facts; scientific concepts are not precise; nor is science unified; context is important, and science is historical and temporal.

At the heart of the idealised picture of science is scientific progress. This is the view that science is leading to ever-increasing knowledge about the universe and that one day we will finally have a theory of everything, at which point, I suppose, science can come to an end, because there will be no more questions to answer. So first of all there are the pre-scientific theories of the universe that we find in religious and mythical texts (like Genesis); then we get the first science, Aristotelianism (though this is really a mixture of science and occult explanations); then Newtonism (which is the first science proper); and then finally, in our times, Einsteinian science, which is a response to the crisis that befell Newtonism. One imagines that sometime in the future, though one can never tell, there will be a fourth science that will replace Einstein’s, but only because it contains more truth and is closer to the universe as it really is than all the other theories we have had. Such a view of the history of science we might call ‘convergence’, since it views the series of scientific discoveries as converging on the true understanding of reality.

There are two problems with this image of science. One is temporal, the other practical. First of all, it has a conception of time in which the past is merely a stepping stone to the present, but has no meaning in itself. For how can we measure the progress of science in this sense unless we imagine an end towards which it is moving, an end that is supposed to be an advance on the past?[2] But how can we know that this advance is real unless we can stand outside of time and measure it? Is it not really the case that the past is not the stepping stone to the future, but that we judge the past from the vantage point of the present, and, in looking back, project a false teleology onto the past? In terms of the past itself, there were numerous possibilities, and the present that we now occupy did not have to occur. Equally, the present that we now stand in has infinite possibilities, so we cannot know what the future will be.

In terms of the practice of science, we also know that this temporal picture of progress is false. This is what Kuhn discovered when he did his own historical research. Rather than the history of science demonstrating that each scientific period progressed into the next, moving to an ever greater level of truth and closing the distance between discourse and reality, we find that it is discontinuous and non-cumulative, and that there is no reality out there which we could know independently and through which we could measure the relative truths of each discourse, because reality itself is a creation of discourse and not its external validation.

What does it mean to say that the history of science is discontinuous rather than continuous, non-cumulative rather than cumulative? Let’s go back to the image of progress in which science moves smoothly from Aristotle to Newton to Einstein. What is left out of this description is the gaps or spaces between each scientific theory (or what Kuhn calls a paradigm, because it is more than just a theory), and it can leave these gaps out only because of the fantasy of some ultimate truth in which reality and discourse are the same. As soon as we leave this fantasy behind, and realise that it too is a creation of a discourse (in this case metaphysics), then we can see that there is no transition from one to the other. Rather, they are separate, or incommensurable. They belong to different worlds.

Again, this is visible when we actually study the history of science, rather than projecting our own view of progress upon it. What we get, instead of a single continuous line, is a line of breaks: Aristotle, Newton, Einstein. What then causes these breaks? Why don’t we just go from one science to another in an endless progression towards the truth? The answer for Kuhn is to be found in history, and not in the philosophical image of science as a universal method.

The new picture we have of science is as follows: first we have pre-science, then normal science, then crisis or revolution, then a new normal science, then a new crisis (Chalmers 1999, p.108). When a science first begins to emerge, we don’t have a collection of facts or theories that explain facts; rather, we have a competition between many theories (Chalmers gives the example of the state of optics before Newton). Gradually different scientists will be attracted to one explanation. What is important is that the reason for this attraction will not just be scientific, or will rarely be just scientific. It will be a combination of different elements, some of which will be psychological, sociological and even metaphysical. As more and more scientists come on board, what was in a state of chaos will coagulate into a paradigm. Only at that point will normal science be possible (the kind of science that Popper and the logical positivists describe). But even a paradigm, which makes normal science possible, is not made up merely of theories and observations. Like Newtonian mechanics, it is constructed from fundamental laws and theoretical assumptions, standard techniques and methods of investigation, general rules about exceptions and application to reality, and, most importantly of all, a kind of world view or metaphysics that unifies all of this together (in Newtonism, that we exist in an infinite deterministic universe).

Anomalies are not, as Popper would have us believe, antithetical to normal science; it can quite happily accept them as long as they don’t attack the fundamentals of the paradigm. Everyone can get happily to work devising their experiments and putting in their grant applications, and anyone who goes against the status quo can be banished to the outer darkness. The paradigm is reinforced by the institutions themselves. If you don’t follow the paradigm you won’t get the grant money, and in any case the education of young scientists makes sure that they follow it. This is what Kuhn saw when he first looked into the history of science as a practising scientist: young scientists were taught an idealised image of science that had nothing at all to do with its history.

So why do paradigms fall? Why are revolutions inevitable? Because of the anomalies. Since no discourse can close the gap between itself and reality, there will always be the nagging doubt that something is not being explained by the paradigm. As more and more money and experiments are thrown at these anomalies, cracks begin to appear in the scientific establishment. Thus a normal science begins to take the form of a pre-science. Rather than scientists doing experiments, they start having ideas and hypotheses. Some might be dismissed as cranks and fools, but gradually they begin to attract other scientists. Again, Kuhn is clear that the reason for this cannot be scientific or logical, because there is nothing in one paradigm that would justify the leap to another; there is no commensurability that would link them together, such that one might say that one is truer than the other. The reasons are practical. As more and more are attracted to this new science, gradually a new paradigm is born and the whole process repeats itself. We get a new normal science, where again people can happily devise their experiments, apply for grants and get promotion. Until, of course, the cracks start appearing again.

Although this appears to be an accurate representation of what scientists do, there is a fundamental problem with it. If we are to give up the image of science as progress towards a truth in which the distance between discourse and reality is progressively closed, in favour of a discontinuous series of closed paradigms, does this make scientific truth relative? We can distinguish normal science from pseudo-science because of how paradigms work (the difference between astronomy and astrology), but that does not make science itself any truer. Can we say that Einstein, for example, is truer than Newton? We want to feel that this is the case, but Kuhn’s principle of incommensurability will not let us do so. The answer to this question, as we shall see when we read Kuhn’s The Structure of Scientific Revolutions in more detail, is that we might have to change what we mean by truth, rather than giving up truth altogether. It means that we have to think of truth as a practice or activity, rather than as a representation of a reality that stands outside of us, waiting for our discourse to catch up with it.

Works Cited

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland Press.

Sharrock, W.W. & Read, R.J., 2002. Kuhn: Philosopher of Scientific Revolution, Cambridge: Polity.

[1] He might have been the first American philosopher to take this idea seriously. In France, this was the dominant view of science (Sharrock & Read 2002, p.1).

[2] It is science (think for example of evolution) that should make us suspect such teleological arguments.

Popper and Falsification – Lecture 4

October 28, 2015

What we want is some criterion that will allow us to distinguish science from any other discourse. In other words, what makes science science, as opposed to religion? What is specific to the method of science? Our simplest response to this question is that science deals with facts that are objective (out there in some way), while religion has to do with belief and is subjective. We might want to say, then, that science is true and religion is not. The closer we looked at this simple definition, however, the less certain and clear it seemed. For the idea that science is made up of many observations of facts that are then converted into theories breaks down on the problem of induction, which, in its most succinct form, is the impossibility of leaping from a singular judgement to a universal one. No amount of logical finessing will get you from a particular to a universal. This would seem to imply that science is no more objective than religion, and that a theory is as much a belief as any faith.[1]

Moreover, it was also clear that the inductionist picture of science was not accurate at all, since facts are not just littered throughout the world such that we pick them up, notice common characteristics and then construct some universal law. On the contrary, we already come to facts with a pre-existing theory, which determines which facts we take as relevant or not (or even which facts we can see). As Ladyman explains, Newton did not find the law of gravity in Kepler’s data; he already had to have it in order to interpret the data (Ladyman 2002, pp.55–6).

This reversal of the relation between theory and facts, the claim that theory comes first and facts second, is the basis of the next philosophy of science that we shall look at, Popper’s theory of falsification, which indeed arose out of the insurmountable problems of ‘inductionism’. His argument is that we should give up induction as the basis of science, but that such a rejection need not lead to irrationalism. Rather, we substitute deduction for induction. But did we not argue already in the first lecture that deduction could not be the basis of science, since deduction is merely tautological? Deductive logic tells us nothing new about the world, but only analyses what we already know, whereas we would say that science actually tells us something about nature that we didn’t know before.

Deduction fails as a basis of science only if we try to move from the singular to the universal; if we go from the universal back to the singular, then deduction does work. Indeed, this move from the universal back to the singular is exactly how, Popper argues, science operates. We do not start with facts and then make laws; rather, we start with laws and then attempt to test them against facts. The logical point is that we cannot go from observations to theories, even if the observations themselves are true, but it is possible the other way around: we can go from a theory back to observational statements to show that the theory is false. Thus, to use Chalmers’s example, if someone were to see a white raven outside the lecture room today, this would prove deductively that the statement ‘all ravens are black’ is false. Such deductive arguments are known as modus tollens, which takes the form: if P, then Q; not-Q; therefore not-P (Chalmers 1999, p.61).
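The asymmetry that Popper relies on can be sketched in a few lines of code. This is only an illustration (the function names and the lists of observations are invented for the example, not taken from Popper or Chalmers): a universal law entails an observation statement for every instance, so a single counter-instance refutes the law by modus tollens, while no number of confirming instances proves it.

```python
# Illustrative sketch of falsification as modus tollens.
# The law T: 'all ravens are black'. T entails, for each observed raven,
# the observation statement O: 'this raven is black'. A single failing O
# gives us not-T deductively (if P, then Q; not-Q; therefore not-P).

def law_all_ravens_black(colour: str) -> bool:
    """The universal claim applied to a single observed raven."""
    return colour == "black"

def falsified(law, observations) -> bool:
    """One counter-instance among the observations deductively refutes the law."""
    return any(not law(obs) for obs in observations)

print(falsified(law_all_ravens_black, ["black", "black", "black"]))  # False: not refuted
print(falsified(law_all_ravens_black, ["black", "white"]))           # True: one white raven refutes the law
```

Note that the first result does not verify the law: however many black ravens we add to the list, the problem of induction blocks the leap to the universal claim. Only the refutation in the second case is a deductively valid inference.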

When we look at the history of science, this seems to be exactly what happens. Take, for example, Eddington’s test of Einstein’s theory that gravity bends light. If the theory was correct, then a star lying beyond the sun should appear displaced from the direction in which the observer would normally see it. Normally the light of the sun would mean that such stars were not visible to us, but they would be if the sun’s light were blocked. Eddington managed to measure just such a displacement during the solar eclipse of 1919. For Popper, the point of this story is that the result could have been otherwise. In other words, Einstein’s theory could have been falsified, if there had been no displacement.

The real difference between science and religion, or any other discourse, is not the theories or hypotheses put forward, but how they are tested. Popper is adamant that science is as creative as any other human discourse and that the origin of this creativity lies outside any logical explanation. That someone comes up with such an idea at such a time cannot be rationally explained. Thus we don’t know how Galileo or Einstein came up with their ideas, or why it was not someone else, at a different time and place; but what we do know is that what makes these creations scientific, as opposed to anything else, is that they can be falsified (this is the difference between the context of discovery and the context of justification). In the opposite case, it does not seem possible to falsify a religion logically. I can always find a reason to believe something. Think, for example, of the classic problem of evil in theology. How do I reconcile the existence of God with evil in the world? It is perfectly possible to find such a reason, as Leibniz did: this is the ‘best of all possible worlds’, and it is just our lack of human understanding that prevents us from seeing it so.

Here we might need to know a little of the story of Popper’s life. When he was young he was a communist, and of course Marxism was treated as a science. He says that one day he went on a march with his friends and they were attacked by the police, and some of them were killed. He was so shaken by this incident that he had to speak about it to his political leaders. They told him that these deaths were necessary for the political emancipation of the workers, as explained by scientific Marxism. But what, then, would falsify Marxism? For there did not seem to be any instance, including the death of his friends, that could not be explained by it.[2]

This is precisely the difference between a science and a pseudo-science (religion is only a pseudo-science when it takes itself to be answering scientific questions; otherwise it is perfectly meaningful for Popper): a pseudo-science has the answer to everything and can never not be true, whereas a science does not have the answer to everything and can always be false. It is this that demarcates, to use Popper’s word, empirical science from anything else, and it is a question of method rather than of logical form, by which he means the positivist obsession with the correlation of statements with aspects of reality. Metaphysics and religion are only pseudo-sciences when they pretend to be sciences. If they do not, then there is nothing intrinsically wrong with them. They are certainly not meaningless, which is just a derogatory word rather than one with any useful philosophical sense.

If what makes a scientific theory scientific is falsification, what exactly makes a falsification scientific? Can any falsification be scientific? Such a broad generalisation does not seem correct, because merely falsifying something would not make a theory scientific. I could ‘falsify’ physics by quoting Genesis, but no one would think I was being scientific. The answer here is intersubjective testability. One cannot conceive how it would be possible to set up an experiment to test my falsification of physics that claimed God had created the universe in the way described in Genesis. One can imagine, however, how it might be possible to test the falsification of Newtonian science through the prediction made by Einstein, which is exactly what the example from Eddington shows, and it is perfectly possible that other scientists could conceive of such an experiment, whether in principle or in practice.[3]

Could a theory always secure itself by simply adding an ad hoc modification every time a falsification was produced? Thus, to use Chalmers’s example, we could take the generalisation that all bread is nutritious to be falsified by the death of all the members of a French village who ate bread. We could then qualify our theory by saying that all bread is nutritious except when it is eaten by these members of this French village, and we could do this every time any falsification was discovered. Such ad hoc modification would completely destroy any progress in scientific discovery. How then can we distinguish between an authentic and an inauthentic ad hoc modification (Chalmers 1999, p.75)? In this example, the modification cannot itself be falsified, so it does not tell us anything new about the world. In fact it tells us less than the original theory that all bread nourishes. So an authentic modification must be one that is itself falsifiable. If we had said instead that all bread nourishes except bread contaminated by a certain fungus called Claviceps purpurea, then this would be an authentic ad hoc modification, since it could be tested and falsified, and thus does tell us something new about the world.

This distinction between authentic and inauthentic ad hoc modifications of scientific theories, however, tells us that we should not overestimate falsifications of theories. When we look at the history of science, we can see that ad hoc modifications can confirm rather than deny a theory. Take the case of the discovery of Neptune. Irregularities in the orbit of Uranus suggested that there must be another planet that had not yet been observed. Rather than reject Newton’s theory, scientists argued that a planet must exist that would explain them. Thus the fact that Neptune was found in 1846 confirmed Newton’s theory rather than falsifying it. Rather than seeing science as just a series of falsifications leading from one theory to the next, from Aristotelianism to Newtonism to Einstein, we should see it as the confirmation of bold conjectures and the falsification of cautious ones. For what difference does it make to science if one falsifies a conjecture such as that the universe is made of porridge, or confirms a cautious one? But how then do we determine what makes a conjecture bold? The only answer must be the background theories themselves, for only in relation to them could we know what would be bold or timid. The background knowledge is the cautious conjecture (what we take to be correct), and the bold conjecture flies in the face of what everyone thinks is the case. We can see, then, what the real fundamental difference between the falsificationist and the inductionist is. The first takes the history of science seriously; the second has no conception of the history of science at all. There is no background knowledge. Rather, facts are accumulated as though there were no context at all and science existed in an eternal present.

Is falsification itself immune to criticism, then? The answer, unfortunately, must be no. The real problem is still the relation between the theory and the observation. All we can say deductively is that if theory T entails observation O, then the falsity of T follows when O is not given; but this tells us nothing about the standard of the evidence itself. What if the evidence is incorrect? Perhaps the person who said that the raven was white had no idea what white was. Perhaps the photograph of the white raven was created in Photoshop, and no such evidence exists. Popper does not have a better story about the correctness of evidence than the positivists.

Moreover, when we actually look at science, it does not take the simple form of ‘all swans are white’. Rather, sciences are made up of a complex collection of universal statements which are interrelated with one another. Now if a prediction fails, this tells us that one of the premises might be wrong, but not which one, or even whether our own experiment might be the problem. It might not be the theory that is at fault, but the ‘test situation’ itself, because we cannot isolate the premise that allows us to falsify the theory (this is known as the Duhem-Quine thesis). So, to use Ladyman’s example, if we were to try to predict the path of a comet, the law of gravity alone would not be sufficient, so if the prediction were incorrect we would not know whether it was the theory of gravity that was being falsified or something else (Ladyman 2002, pp.77–8).

Even if such an isolation were possible, falsification does not seem to capture what science and scientists actually do, for when we look at the history of science we do not find one great conjecture following another; rather, scientists stick to their theories despite the fact that they can be falsified, or they adopt a new hypothesis even though all the known evidence at the time should have killed it off at birth. This is what we find when we look at the detail of the eventual transition from the Aristotelian to the Copernican view of the world, as Feyerabend and Kuhn describe it. It certainly was not the simple falsification of the one by the other. Science works, to some extent, because scientists are dogmatic and not open to falsification. If that is the case, how is it possible to differentiate, or demarcate, science from any other dogma? Will we not have to use different criteria?

Works Cited

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland Press.

Ladyman, J., 2002. Understanding Philosophy of Science, London; New York: Routledge.

Popper, K.R., 2002. Unended Quest, London; New York: Routledge.

[1] When we look at science as a method this is a problem. We might ask, however, if we think of science as an activity, whether it is such a problem.

[2] The source of this story can be found in Popper’s autobiography (Popper 2002, pp.30–8).

[3] Does this open Popper to a more pragmatic account of science than an epistemological one? For if testability is intersubjective, how are we to describe it? Popper appears to want to separate questions of method from questions of practice, but later criticisms will in turn want to question this distinction by asking whether it is really the case, when we look at the history of science, that scientists are committed to the principle of falsifiability. This will be part of Kuhn’s critique of Popper.

Philosophy as a Way of Life

October 21, 2015

Nowadays we tend to think of philosophy as an academic discipline that you study at university, and of a philosopher as a professor of philosophy. But that is not how it has always been, according to the French philosopher and historian of ancient philosophy Pierre Hadot (Hadot 1995, pp.81–125). In ancient philosophy it was perfectly possible to be a philosopher without having written anything, for what mattered was not the discourse of philosophy in itself (being knowledgeable about philosophical theories), but living philosophically.

Living philosophy, Hadot tells us, was a spiritual exercise, and he is very clear about why it has to be called this, even though to our ears it might sound overtly religious or, particularly because of Ignatius Loyola, Christian.[1] Spiritual, because it was more than a merely moral or intellectual exercise, and consisted in a total transformation of one’s existence. Hadot divides these spiritual exercises into four distinct disciplines, which we will describe in turn:

1. Learning to live

2. Learning to dialogue

3. Learning to die

4. Learning to read

Learning to Live

If the aim of philosophy is to teach one how to live one’s life better, what is it that prevents us from doing so? The answer for ancient philosophy is the passions. It is because we cannot control our passions that we end up miserable and unhappy. The art of living well, therefore, is measured by the ability to control one’s passions, and this is what philosophy can teach you. One of the schools of philosophy, the Stoics, argued that there were two origins of human unhappiness: we seek satisfaction in possessions that we cannot have or can lose, and we try to avoid misfortunes that are inevitable. What philosophy teaches us is that the only things that truly lie in our power are moral goods. The rest we should accept with indifference. I cannot control what happens to me, but I can determine my attitude to it. It is through the spiritual exercises of philosophy that we can free ourselves from our passions and view any misfortune that happens to us with equanimity. The most important of these exercises in Stoicism is ‘attention’ (prosoche). It is only through constant self-vigilance that I can learn how to control my passions. The fundamental rule of life is to be able to determine what depends on me and what does not, and I can only do that through permanent attention to myself and to the outside world. One of the most important aspects of this self-vigilance is attention to the present moment. Much human unhappiness is caused either by being weighed down by the past or by hoping for too much from the future. It is better to live in the present moment and accept reality as it is: the simple joy of existing, as the other major school of ancient philosophy, Epicureanism, calls it.

The intellectual exercises of philosophy, reading and writing, listening and talking to others, were never simply for the sake of gaining more knowledge, but for applying this knowledge to how one lives one’s own life. Thus physics, for example, was never just about learning the structure of the universe, but also about demonstrating the scale of one’s own petty human worries. In an infinite universe, how much do my own fears and desires matter? Nature is indifferent to my unhappiness, and only my own freedom should concern me, which is the freedom to be who I am.

Learning to Dialogue

Intellectual and spiritual activity is never a solitary affair. This is why the ancient philosophical schools were always communal in form. I learn to think for myself by thinking with others. It is not so much what is said that is important but that one speaks, because it is only through interacting with others that I can gain any self-knowledge. As Hadot writes:

The intimate connection between dialogue with others and dialogue with oneself is profoundly significant. Only he who is capable of a genuine encounter with the other is capable of an authentic encounter with himself, and the converse is equally true. (Hadot 1995, p.91)

What I learn is that philosophy is a journey and not an end. Wisdom is always something towards which I can only ever aim and never reach. Such a relation of authentic speech with others is always more important than writing and appearing knowledgeable. Again, the aim of philosophy is self-transformation and not knowledge, if knowledge here means theory or discourse.

Learning to Die

Learning to die is not a morbid obsession with death. Quite the opposite: it is to learn not to fear death. For the most important aspect of human life is that it transcends death. Socrates, the most important philosopher for both the Stoics and the Epicureans, was willing to die for his beliefs, because he realised that what was most important about him was not his body but his ideas, and these would live on despite him. Far more important than one’s individual life is truth itself. To learn to die, therefore, is not to be obsessed with death in a morbid way, but to aim for a higher existence; to realise that thought is more important than the passions of the body. It is to transcend the individual existence of the sensible body (which will perish as part of the natural course of things) for the sake of the universality and objectivity of thought. It is in thought that we find our true freedom, whereas our body, through which our passions affect us, is a kind of tyranny and prison. The fact of death highlights the insignificance of the affairs which torment and worry us. Our death could arrive at any time, so we shouldn’t become too attached to our possessions, nor try to find meaning in what is inauthentic. To think of one’s death in one’s life is to realise what is and is not important. It is the very possibility of an authentic life.

Learning to Read

To read, to gain knowledge, is not an end in itself but is for the sake of self-formation, of understanding oneself.[2] This means ridding ourselves of the inessential to find what is essential beneath, and what is essential is the life of reason, for this is what expresses the true essence of the human being. Only in the practice of thought can I truly be free; the rest is the slavery of the passive emotions. The aim of all spiritual exercises is therefore the same: to return to the true self so as to liberate yourself from the passions that control you from the outside. For the Cynics, the third great school of ancient philosophy, this meant breaking with all social conventions and morality, since society’s rules are themselves only the result of people’s fears and desires, and not a true reflection of human virtue.

Even the written masterpieces of philosophy that we still read today are not important in themselves. One reads and writes philosophy not so that one can be clever about it, but because the practice of reading and writing is itself directed towards self-mastery and control. Thus what is important first of all is teaching (learning how to dialogue), and writing only has a function within this practice. Such, then, is the origin of our own confusion. For us, philosophy is about systems, discourses and books. So when we go back to read ancient philosophy, we are troubled by the absence of systematic thought. But this is because we have failed to understand the context and the reasons for this writing. It was never for the sake of philosophical discourse itself, but for the practice of self-mastery and freedom.

Why then have we ended up with such a different conception of philosophy as an academic discipline? Hadot's answer is that with the rise of Christianity as the sole religion of the state there was no reason to have competing philosophical schools all contesting their own interpretation of truth, and so they were closed (by the emperor Justinian in 529 AD). More important than this mere historical event, however, is the relation between theology and philosophy in the medieval university. If theology is the source of truth about how to live one's life, then the function of philosophy can only be secondary. Its purpose was to rationalise the dogmas of religion, but it was religion itself, and not philosophy, that was the guide to life. In the modern age, however, with the rise of secularism and the end of the domination of theology, philosophy as a way of life can emerge once more, and there is no doubt that in modern philosophers such as Kierkegaard, Nietzsche and Heidegger (and even Foucault in more recent times) philosophy again has a direct bearing on how one lives one's life, rather than being an academic discourse. Of course one might wonder, if this is the authentic voice of philosophy, what academic philosophy in universities is meant to be and whether it truly can take up its spiritual vocation.

Works Cited

Hadot, P., 1995. Philosophy as a Way of Life: Spiritual Exercises from Socrates to Foucault, Oxford: Blackwell.

[1] Worse than this, it might even sound stupid, as much of the industry around spirituality is.

[2] This conception of education is entirely absent from our current society, which tends to believe that the only function of education is to earn more money. See for example Lord Browne's report on the funding of Higher Education in England (the basis of the privatisation of universities), which can only conceive of education as a private economic benefit. See http://goo.gl/CrRYl.

The Problem of Induction – Lecture 3

October 16, 2015

The justification of science appears at first glance to be the generalisation of experience. I heat metal x and see that it expands, I heat metal y and see that it expands, I heat metal z and see that it expands, and so on, such that it seems natural that I can claim that all metals expand when I heat them. Most scientists think this is what a scientific argument is, and most would also think this is what we might mean by objectivity. There are, however, two questions we might ask of them. First, does the inductive method really produce knowledge, and secondly, even if it did, is this how science itself operates in its own history?

Let us take the first question first, because it is the more traditional problem of induction, and has its canonical form in the argument of Hume. To understand his problem with induction we first of all need to understand his epistemology. For Hume, there are two kinds of propositions: relations of ideas and matters of fact. In the first case, the truth of our ideas is confined to our ideas alone. Thus if you understand the concept 'bachelor' you know the idea 'unmarried man' is contained within it. When it comes to matters of fact, however, we have to go beyond our concepts to experience. They tell us something new about the world and not just the ideas we already know. A matter of fact would be that Paris is the capital of France, or that metals expand when heated. Of course when you know the idea then you know what is contained in it, but to obtain the idea you first of all have to acquire the knowledge.

There can be false relations of ideas just as there can be false matters of fact. Thus if you think that a whale is a fish, then you have made an error about a relation of ideas (you don't know that a whale is a mammal), and if you think that Plato died in 399 BC, then you have made an error at the level of facts (Ladyman 2002, p.32). Relations of ideas can be proved true by deduction, since their negation is a contradiction. Basically, relations of ideas are tautologies: you cannot assert that Peter is a bachelor while at the same time asserting that he is married, since being an unmarried man and being a bachelor are one and the same thing. Matters of fact, on the other hand, cannot be proved by deduction, but can only be derived from experience, and their negation is not a contradiction. If I say that Everest is the tallest mountain on Earth, none of the terms have a logical relation to one another, so I could assume that there is a taller mountain. I would have to experience the different tall mountains on Earth to know which one was the tallest (Ladyman 2002, p.33). For this reason Hume was extremely sceptical about what one could claim to know deductively. All that one can claim are logical relations between concepts that we already know (whose origin anyway would be the senses). What we cannot claim is to produce new knowledge about the world simply through examining our concepts (as theology and metaphysics are wont to do, in his opinion).[1]

These distinctions seem very straightforward and at first glance appear to back up the inductivist view of science. The problem for Hume, however, is whether matters of fact could ever have the same necessity as relations of ideas, as the idea of expanding metals as a universal law implies. The key to this problem for Hume is whether I can assert that what has happened in the past is a certain guide to what will happen in the future. I have experienced the fact that the sun rises every morning. Does this give me the right to say it will rise again tomorrow, when I haven't actually experienced this dawn yet? If it does rise then I will be certain, and in terms of the past, I know that it did rise, but how can I know that it will rise again tomorrow? It is perfectly possible, even if it were unexpected, that the sun might not rise.

Induction for Hume is based upon causal arguments. Our only knowledge of cause and effect is through experience itself, because there is no logical reason why any causal relation should hold or not hold. I know matches cause fires because I know that from experience, not because matches logically contain fire. Just as we can only infer the future behaviour of the world from the actual experience of the world, so we can only understand the category of causality from experience. In other words, without experience we would not have the concept of causality as a generality. If I always experience the dawn as the rising of the sun then I conjoin these events. If B always follows A, then I will say that A causes B. This is because I believe that the future always follows the same path as the past, so that if A happens, then B will happen. Linked to conjunction are contiguity and precedence. Contiguity means that B follows A in time and space, and precedence means that the effect always comes after the cause (the flame is after the lighted match and not before). It is because of conjunction, contiguity and precedence that we feel we have good reason to say that A causes B, or that the sun will rise tomorrow. Hume's assertion, however, is that this can never be a necessary reason, as is suggested by the generalisation of a universal law, however compelling I feel this causality to be.

Take the example of billiard balls, which seems the most basic relation of causality. The ball X hits the ball Y and causes it to move. But what do we mean by that? Do we mean that the ball X makes the ball Y move, or that it produces its movement? We think there is a necessary connection between the two events, X moving and Y moving. But what we experience is conjunction, contiguity and precedence; what we do not experience is some mysterious 'necessary connection'. What we see is ball X and ball Y; what we do not see is some third thing (an invisible connection; indeed, what we do not see is causality). What would it add to our explanation of the events if we were to add this mysterious cause? Wouldn't the ball X and the ball Y just move in exactly the same way?

The point for Hume is that just because two events have always been conjoined in the past does not mean that we can be universally certain that they will always be so. The conclusion of an inductive argument could be false, but that would never make it invalid (indeed it might make it more interesting, as if the sun did not rise the next day), whereas this is never the case with a deductive argument: if the premises are true, then the conclusion is necessarily true. What underpins the inductive generalisation is the belief that nature is well ordered spatially and temporally, that what happens many times will happen again in the same way. But that is just an assumption. Why must the future always be the same as the past? It certainly is not a logical contradiction if it were not.

Now of course we make these kinds of inferences all the time, and Hume accepts that. I probably would not be able to live if I really thought the sun would not rise tomorrow every time I went to bed. But this uniformity is a result of our psychology (perhaps it is an evolutionary trait) rather than of reason or logic. We find regularity in nature because of our habitual associations of events, and not because these events are necessarily connected.[2]

There is no doubt that Hume's problem is very profound and does make us look at induction more critically, but we might think that the idea that science itself is inductive in the simple way that inductivism implies is too simplistic. It is important to note that this is a very different critique from the methodological one. In the first case, we investigate the method of induction and, like Hume, say that it is flawed, or we might even argue that Hume's own account of induction is not a correct description of induction.[3] In the historical account of science, on the other hand, we are asking whether the description of the method is actually how scientists themselves work. One is a description of the content of scientific knowledge, the other is a description of the activity of scientists themselves. Do scientists really act the way that Hume's example suggests they do? This is a completely different way of doing philosophy of science. For it does not first describe a method of doing science and then apply it to scientists; rather, it examines what scientists do and from that derives the method. We shall see that this way of understanding science is going to be very important to Kuhn.

Why might we think that scientists do not use the inductive method in the way that induction has been described so far? Take the example of Newton's Principia (Ladyman 2002, pp.55–6). Newton presents in this work the three laws of motion and the law of gravity. From these laws he explains natural phenomena like planetary motion. He says that he has inferred these laws through induction from observation. It is the French philosopher of science Duhem who points out that there is a problem with Newton's explanation. The data he is using is Kepler's. Kepler's laws state that the planets move in ellipses around the sun, whereas Newton's theory implies that they deviate from perfect ellipses because of the mutual attraction of the planets. This means that he could not have inferred the law of gravity from Kepler's data; rather, he already needed the hypothesis of the law of gravity to interpret Kepler's data. Again, Newton's first law states that bodies will maintain their state of motion unless acted upon by another body, but we have never observed a body that has not been acted upon, so this law could not have been obtained through observation. Even Kepler's theory could not have been derived from observation, because he took his data from Brahe, and could only organise it by already assuming a hypothesis about how the planets moved, one he did not receive from the data, but from the mystical Pythagorean tradition.

So there are two reasons why we might be sceptical of the simple inductive explanation of science. One is methodological, through the problem of induction (though we might come up with a better inductive method to solve this), and the other is historical: that science does not work in the way that the theory of induction describes. I think the latter is the more serious issue of the two. For in the end science is what scientists do, and not what philosophers might idealise that they do. If you like, the problem of induction is a problem for philosophers. It isn't one for scientists.

Works Cited

Ladyman, J., 2002. Understanding Philosophy of Science, London; New York: Routledge.

[1] A group of philosophers from the 20th century called logical positivists also liked this distinction, and differentiated mathematical and logical truths, on the one hand, and science on the other. Anything that didn’t fit this schema was said to be nonsense or meaningless. I am not sure that Hume would have gone that far.

[2] Kant’s argument against Hume is that causality is not merely a habit of the mind but a necessary part of our representation of the world. It would not make sense without it.

[3] This is what Ladyman does when he lists all the different ways in which we might counter Hume, the most telling being induction as the ‘best explanation’ (Ladyman 2002, pp.46–7).

Sexual Difference in Freud

October 12, 2015

Freud was once supposed to have said at a party, 'What does a woman want?' (in German, Was will das Weib).[1] Why should we think that women would know less about what they want than men do? We might want to dismiss Freud's remark out of hand as sexist. Obviously there are many places in Freud's work where one could find evidence for such a thing, and this would just be one more example out of many. I don't want to defend Freud in this regard. I think it would be very hard for someone of that time, from our point of view, not to be sexist, and Freud is hardly special concerning these matters. After we have made our accusations, however, there might be something more interesting to say. I am reminded of something that Adorno said about Freud: that when he is at his most exaggerated, that is when he is true.[2]

Why was it that most of Freud's patients were women? Do we have to look for the answer to this question in some aspect of Freud's personality? Is not the real answer that it is entirely unsurprising that an educated woman of the early 20th century would have been driven quite literally hysterical? The fact that Freud's treatment room was full of women tells us nothing about women (that women are more susceptible to hysteria than men, for example), but tells us everything about the society they lived in at the time, which closed off pretty much every opportunity to them. Take, for example, the patient at the heart of Freud's first case study (though she was his friend Breuer's patient), Anna O., whose real name was Bertha Pappenheim, and who later had a leading role in the development of German social work. Could we not say that her symptoms were caused by the world that she lived in? The real question, then, is why this world was more damaging to women than to men, and whether it is still so today. What is the difference between men and women such that one could be more damaged than the other? The answer to this question, I want to convince you, must be social rather than biological. There is nothing in the nature of women that would make them less equal than men.

In speaking, thinking and writing about sexual difference, you might imagine that the most important word in this expression is 'sexual' and not 'difference', since after all what we are interested in is sex. Yet to understand the possibility of there being two sexes, one first of all has to know what kind of difference is at stake, because this choice will completely determine how you understand your own sexuality. There are two ways that we can think this difference: either we think that it is real, or we think it is symbolic. In the first case, difference is determined by nature. This is a very old idea, even though today the new language of genes and evolutionary psychology might dress it up in an apparently objective and neutral discourse. The difference between men and women, then, was laid down 250,000 years ago when the human species first emerged, and the whole search for equality and justice between the sexes is just liberal wish fulfilment. What might make us a little sceptical about this thesis is that the behaviour of our distant ancestors, about which we know very little, just happens to be exactly the same as the prejudices of our more traditional and conservative fellow citizens. In the symbolic universe, on the contrary, it is not nature that determines the difference between the sexes, but language; that is to say, sexual differences are symbolic, and if there is a biological element within sexuality, then it is moulded, shaped and transformed by social and individual pressures and forces that interpret and place a certain value on it. This is the line that Freud takes, but we might conclude that he does not take it far enough, because he still wants to look for something universal that determines the difference between the sexes, even though it is no longer natural. Or, if we want to be more precise, it is not that he still seeks something universal that makes his interpretation of sexual difference finally inadequate, but that he finds it in the wrong place. This is why we're going to end with Lacan (well, at least Lacan as he is reinterpreted by Zizek).

It is Freud whom we must thank for the invention of the symbolic interpretation of sexual difference. It is in his Three Essays on Sexuality that we first see a committed and resolute argument against a biological and natural interpretation of human sexuality, which only sees sexuality in teleological or utilitarian terms: we only have sex for the sake of something else, for procreation or serial monogamy. For Freud, on the contrary, human sexuality is highly complex and differentiated, and what we find is that sexuality expands well beyond any purpose or useful value, a general sexuality which he called 'polymorphous perversity' (Freud, 1991, p. 109). To understand the meaning of this perversity, however, we have to go back to the genesis of human sexuality. How is it that the child becomes a man or a woman and takes on sexual difference, which is something that we become rather than are born as?

First of all, this is not primarily a biological process, although biology, of course, must come into it, but an accomplishment. You have to become a man or a woman in the full sense of the terms. You aren't just naturally a man or a woman. The key essay for us here is a much later work of Freud's, 'Some Psychical Consequences of the Anatomical Distinction between the Sexes' (Freud, 1991, pp. 323–43). When we look at this essay in detail, we can see that there are two different series, one for the girl and one for the boy. These series are not innate as such. This means that you have to live in a society, and society, as such, determines how each series works. Now Freud has a word for how this pressure of society works, and it is the Phallus. It is very important not to confuse this with the penis. The phallus is not biological but symbolic, and as we have said above, what characterises human sexuality is that it is symbolic.

Of course this does not mean that there are no biological elements in human sexuality, otherwise it might be hard to imagine how we could have ended up with the difference between the sexes, but this difference is not enough by itself to explain the complexity of our sexuality (what it is to be a man or a woman), however many chimpanzees one looks at. Our biology is always interpreted through a symbolic universe, which is given in advance and determines how we are going to interpret the fact we have a penis or we do not.

It is Freud's absolute conviction that we live in a male society. Many people will say that he is sexist, and when I tell you about his theories of human sexual development you might agree with them, but I think he is quite correct about this. We do live in a male society.[3] It was certainly the case when Freud was writing (it is no surprise, as we said right at the beginning, that nearly all his cases were women) and I think it is still the case now, even though there have been all kinds of advancements in the meantime in terms of the law and work. If we do live in a male society, then being born biologically a girl means that you are going to be seriously disadvantaged from the start, and this drawback has nothing at all to do with biology, but with how this biological destiny is interpreted. Or, in Freud's words, with how the logic of the Phallus operates on one's sexual development.

Let us then see how Freud himself explains how one becomes a boy or a girl; that is to say, how one ends up fulfilling one's destiny and becoming what one already was. First of all, let us take the series of the little boy. At the earliest stage of the child's relation to the parents, which Freud calls the 'phallic phase', there is no distinction between the sexes. This is because what determines one's sexual identity is the object of one's desire, and it is clear that both the girl and the boy have the same object, which is the mother (or more precisely the mother's breast). For the little girl, however, to become a woman, she has to change her object of desire from her mother to her father. The explanation of this transformation is given by what Freud calls the 'masculine ideal'. It is this ideal which gives the physical differences of the sexes their negative and positive significance and explains the divergence in their development: from the phallic phase to the Oedipal phase to the castration complex and its dissolution for the little boy; from the phallic phase to penis envy to the Oedipal complex for the little girl. You might notice in these divergent series that the little girl never leaves the Oedipal complex.

What one has to understand, however, is that none of this makes sense without the masculine ideal being in operation from the very start. It is this ideal which ensures that the development of the two series is divergent, so that at one end we end up with the little girl and at the other the little boy. For why would the little girl feel different in this way unless she measured herself against the masculine ideal? Now such an idealisation cannot be made sense of biologically. Sure, there is a difference between the sexes, but that is not sufficient to explain why having a penis is a good thing and not having one is bad. The possibility of such a structure of idealisation is not to be found in our bodies but in language, in how it already structures our experience of them, and in how the little girl comes to experience her body as lacking something, which then affects the rest of her psychological development.

It is Freud's disciple Lacan who, following the teaching of the Swiss linguist Saussure, showed that this process of sexual differentiation depends entirely on the structure of language and not on our biological fate alone. For Lacan, Saussure's fundamental discovery was that language is divided between the function of the signifier and the signified, the signifier being the word itself and the signified what the word represents or signifies. Such a difference is not important in itself; what matters is the realisation that the signifier can operate without the signified. It is this separation of the two aspects of language that explains the possible existence of the ideal which can structure our experience. I am already immersed in language before I speak, through the others who speak to me and the culture they bear in this speaking. These others name and place me in the division masculine/feminine. I then constitute myself through this placing: I refuse or accept it. This is the law, and I become a subject through it. Women have a different relation to the law, and this has ethical consequences. For, to some extent, if femininity exists, it is only because she escapes the law, though this might only be expressed negatively, as it is in Freud's text.[4]

No one has explained more clearly how this split works than the Slovenian philosopher Slavoj Zizek, and the example he gives is the ordinary Coke bottle (Zizek, 1989, p. 96). How is it that this object, when I look at it, somehow represents the ideal of America? Common sense tells us that the idea of America comes first, and that this idea is then somehow 'symbolised' by the Coke bottle. But this is to interpret the relation between the two elements after it has been constructed. It is not an explanation of how it is created, for it is clear that it is the Coke bottle which is the origin of the idea of America and not the other way around: it captures something rather hazy and ill-defined in a definite object, which can then pin this picture of the ideal American life down for us. It is not, of course, the properties of Coke which make it this symbol, for there is no reason that such a strange-tasting liquid should do that; rather, it is its formal function. In this instance, the Coke bottle (and it does not have to be this object; it could have been anything else) is operating as a pure signifier. It is like an empty box onto which we can project our fantasy of what America is and which can then organise and consolidate this reality. It isn't that the Coke bottle signifies the American ideal, because the ideal could not exist without it; rather, it is the place through which this ideal is produced. It is precisely because it doesn't mean anything, because it is 'it', as the advertisement goes, that it can act as the empty signifier through which the idea of America can coalesce.

The masculine ideal operates in exactly the same way as the Coke bottle. There is nothing empirical about the male sex that would make it ideal. Rather, masculinity has to go through a process of idealisation through which it can then be translated into a norm by which the status of the two sexes can be measured, the one as positive and the other as negative. Although there is something fixed about sexual differences, there is nothing stable about the ideal which fixes our fantasies. One day the Coke bottle could be just a container for a strange-tasting brown liquid and nothing else. And equally the male sex may no longer occupy the space of the ideal from which the development of the two sexes is measured. The ideal space is precisely empty. Anything can occupy it, so that one might imagine in the future, for example, a feminine ideal, where the little boy would experience himself as mutilated rather than the little girl. What is universal, then, is not the masculine ideal as such, but the ideality which language makes possible. Equally, even when an ideal works, it is never a total success. This is why the elevation of Coke to an ideal strikes us as a bit corny and over the top. Surely reality just isn't like that, and we only have to visit the real America to think that what it represents is a fantasy. In the same way, the reality of women is always escaping the masculine ideal, and in fact it is men who are more under its power than women are. Reality might be structured by language, but it is always being destabilised by it from within.

Works Cited

Adorno, T.W., 2010. Minima Moralia: Reflections on a Damaged Life, London: Verso.

Elms, A.C., 2001. Apocryphal Freud: Sigmund Freud’s Most Famous “Quotations.” In J. A. Winer et al., ed. Sigmund Freud and his Impact on the Modern World. New York: Routledge, pp. 83–104.

Freud, S., 1991. On Sexuality: Three Essays on the Theory of Sexuality and Other Works, A. Richards & J. Strachey, eds., London: Penguin Books.

Graeber, D., 2011. Debt: The First 5,000 Years, Brooklyn, N.Y.: Melville House.

Zizek, S., 1989. The Sublime Object of Ideology, London; New York: Verso.

[1] According to Freud's biographer Ernest Jones, he was supposed to have said this to Marie Bonaparte, who was a patient of his, though the phrase never appears in his work or his diaries (Elms, 2001, pp. 84–8).

[2] What he actually wrote is ‘In psychoanalysis nothing is true except the exaggerations’ (Adorno, 2010, p. 49).

[3] Anthropologists tell us that there have been examples of female societies in the past, but they have long since disappeared with the rise of agriculture and the state (Graeber, 2011, pp. 176–82).

[4] Freud writes, ‘I cannot evade the notion (though I hesitate to give it expression) that for women the level of what is ethically normal is different from what it is in men. Their super-ego is never so inexorable, so impersonal, so independent of its emotional origins as we require it to be in men. Character-traits which critics of every epoch have brought up against women – that they show less sense of justice than men, that they are less ready to submit to the great exigencies of life, that they are more often influenced in their judgements by feelings of affection or hostility – all these would be amply accounted for by the modification in the formation of their super-ego which we have inferred above.’ (Freud, 1991, p. 342)

Induction – Lecture 2

October 7, 2015

Last week we spoke about the difference between science and religion. We said it could be conceptualised as one between belief and facts. The more we investigated what a fact is, however, the less certain we became of its status as a starting point for scientific investigation. Common sense might tell us that facts are just out there, that we simply observe them, and that scientific theories are merely collections of these observations, but when we look at the history of science it is clear that this is not how science works. What we take as facts is already determined by the way we understand and see the world, and our observations are equally shaped by this background conceptuality. In this lecture, we are going to investigate the problem of induction, which is probably the classic form of the philosophy of science, and we shall see that we'll come up against the same barrier again. Moreover, the knowledge that science has of the world cannot itself be infallible, because of the very way that it interprets these facts.

Ordinarily we might think that scientific theories are obtained from facts through observation and that this is what makes them different from belief. But what exactly does it mean that theories are obtained or derived from facts? How do we get from the one to the other? What we mean here is something logical rather than temporal. We don't just mean that first of all there is a collection of facts, and then a theory, as though facts were just pebbles on a beach that we pick up. A theory, on the contrary, is supposed to tell us something about these facts before we have even discovered them. It is about meaning and context, rather than just what comes first or second in a temporal order.

What then do we mean by derivation when we speak about logic? We don't have to go into the complexities of logic here, but only its basic form, since all we are interested in is how theories originate from facts. Logic is based upon deduction. Here is a valid deductive argument, which comes from Ladyman:

1) All human beings are mortal

2) Socrates is a human being

3) Socrates is mortal. (Ladyman 2002, p.19)

1 and 2 are the premises and 3 is the conclusion. You cannot deny the conclusion if you accept the premises as true. We can change the premises slightly, however, as Ladyman shows, and the deduction becomes invalid:

All human beings are animals

Bess is an animal

Therefore Bess is a human being (Ladyman 2002, p.19)

What is important here is that it is the form of the argument itself that is wrong. The conclusion does not follow from the premises even if one accepts them: Bess could be any kind of animal. What is positive about deductive arguments is that they are truth-preserving. That is, if the premises are true and the argument is valid, then the conclusion is true as well. The problem is that the conclusion does not contain any more information than the premises. It does not tell you anything more about the world, and surely telling us more about the world is exactly what science does.
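The contrast between the two forms can even be checked mechanically. The following sketch is my own illustration, not from Ladyman: it models the predicates as sets over a tiny invented domain, and tests each form by asking whether the premises can hold while the conclusion fails.

```python
# Illustrative sketch (invented domain, not from the lecture). An argument
# form is valid only if there is no case where the premises are true and
# the conclusion false.

humans = {"Socrates", "Plato"}
animals = humans | {"Bess"}        # Bess is some non-human animal here
mortals = animals                  # suppose every animal is mortal

# Form 1: all humans are mortal; Socrates is a human; so Socrates is mortal.
premises_1 = humans <= mortals and "Socrates" in humans
conclusion_1 = "Socrates" in mortals
print(premises_1 and not conclusion_1)   # False: no counterexample

# Form 2: all humans are animals; Bess is an animal; so Bess is a human.
premises_2 = humans <= animals and "Bess" in animals
conclusion_2 = "Bess" in humans
print(premises_2 and not conclusion_2)   # True: premises hold, conclusion fails
```

The second form fails because "Bess" witnesses a counterexample: true premises, false conclusion. That is what it means for the invalidity to lie in the form rather than in the content.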

From this it follows that if science is derived from facts, it cannot be derived logically, because logic cannot tell us whether a fact is true or not. If we know certain facts are true, then we can logically relate them together (logic is 'truth preserving'), but it is only from experience that we know whether they are true. Take for example the scientific law that metal expands when it is heated. It does not matter how many times I repeat the observation, as Chalmers argues; it does not logically follow (as is implied below) that all metals will expand when heated:

metal x expanded when it was heated

metal y expanded when it was heated

metal z expanded when it was heated

All metals expand when heated (Chalmers 1999, p.44)

If scientific theories are not derived from facts logically, then how are they derived? The answer must be through experience itself; that is to say, inductively. What do we mean by induction? First of all, the difference between deductive and inductive arguments is that in the latter the conclusion always goes beyond what is contained in the premises, as the example above shows. I can never be certain that all metals will expand when heated, because this is precisely what I assert when I move from singular instances (this metal expands when heated) to the universal judgement that all do so.

How then can I adjudicate between a good and a bad inductive argument in the way that I did with deductive ones? Common sense suggests that I might be able to justify my universal judgements by going through a number of singular observations. In other words, I observe a large number of samples of metal to see whether they expand or not, and if in this large number I observe that they do, then I would be justified in asserting 'All metals expand when heated'. Thus the laws of induction would be:

1) The number of observations should be large

2) They must be repeated under a wide range of conditions

3) There should be no exceptions.
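The third rule is the most demanding of the three, and a toy illustration (my own, not from the text) shows why: no accumulation of confirming instances protects the generalisation from a single exception.

```python
# Toy illustration of rule 3: one counter-instance overturns the
# generalisation, no matter how many confirmations precede it.

observations = ["expanded"] * 10_000            # a large sample, all confirming
generalisation_holds = all(o == "expanded" for o in observations)
print(generalisation_holds)                     # True after 10,000 confirmations

observations.append("contracted")               # a single exception
generalisation_holds = all(o == "expanded" for o in observations)
print(generalisation_holds)                     # False: the law no longer holds
```

The asymmetry is stark: the universal claim can be refuted by one case but never conclusively established by any finite number of them. This is the gap that the problem of induction, discussed below, exploits.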

It is precisely for this reason that the English philosopher and scientist Francis Bacon came up with his 'new method'.[1] First of all this method is negative: the point is to avoid falling into bad arguments rather than to come up with new deductive ones. Bacon's method consists of rules about how to practise science while avoiding some of the worst errors. These errors he called 'idols of the mind': that we tend to see order and regularity in nature when there is none is the idol of the tribe; that our judgements are shaped by our language and concepts rather than by what we see is the idol of the marketplace; and that our views of nature can be distorted by our philosophical and metaphysical systems of thought is the idol of the theatre.[2] From this follows the positive content of Bacon's method: we ought to make observations of nature that are free of these idols. It is from the mass of information gained through observation that we should make generalisations, rather than understanding our observations through generalisations, which is what he accuses the philosophers of doing. This he calls the 'natural and experimental history'.

It is important to understand that what Bacon meant by observation is not just looking but experimenting, and it is this emphasis on experiments that distinguishes the new method from the old Aristotelian one. It is experiments that preserve the objectivity of observations: first, they allow observations to be quantified, and secondly, they can be repeated by others and thus tested for their reliability. The data from these experiments are then put into tables. To use Bacon's example of heat: first we have the table of Essence and Presence, which lists those things that are directly part of the phenomenon of heat; secondly, we have the table of Deviation and Absence, which lists those phenomena that are related to the first but have no heat; and then we have the table of Comparison, where features that have a quantity of heat are listed and quantified. The empirical method is one of elimination. Let us say I argue that the colour white is the explanation of heat. I would then check my tables and see that not all phenomena that are hot are white, or that some phenomena that are white are not hot, and so on. White, then, could not be part of a theory of heat. Through this process of elimination Bacon explained that heat was caused by the 'extensive motion of parts', which is not far from the modern kinetic theory of heat.
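The eliminative logic of the tables can be sketched in a few lines. The instances and features below are invented for illustration; only the procedure, eliminating any candidate cause that is absent where heat is present or present where heat is absent, reflects Bacon's method.

```python
# A minimal sketch of Baconian elimination over invented data. Each instance
# pairs its observed features with whether heat is present.
instances = [
    ({"white", "moving_parts"}, True),   # e.g. boiling milk: white and hot
    ({"white"}, False),                  # e.g. snow: white but cold
    ({"moving_parts"}, True),            # e.g. friction: hot, not white
    ({"still"}, False),                  # an inert, cold body
]

candidates = {"white", "moving_parts", "still"}

for features, hot in instances:
    if hot:
        # Table of Essence and Presence: a true cause must accompany
        # every instance in which heat is present.
        candidates &= features
    else:
        # Table of Deviation and Absence: a true cause must never
        # appear in an instance from which heat is absent.
        candidates -= features

print(candidates)   # only 'moving_parts' survives elimination
```

Whiteness is eliminated by the cold snow instance, just as in the example in the text, leaving motion of parts as the surviving candidate.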

Bacon believed one can discover the forms that make what we observe possible, even though they are not directly perceivable. These forms were the direct physical cause of what we saw. This was the rejection of final causes, in which natural phenomena were viewed as purposive. The Aristotelian explanation, for example, was that stones fall to the ground because the earthly element seeks the centre of the earth. Teleological explanations such as these are only suitable for human actions (since humans, unlike stones, do have desires), not for natural phenomena. The ubiquity of physical causes is the major difference between the new empirical science of the 17th century and the old science of Aristotle's era that had dominated the explanation of nature for so long.

There are, however, problems with induction. First of all, what is the status of the non-observed forms that are the physical cause of what we observe? How can we make a leap from what is seen to what is not seen? It is possible to see how heat might be explained by Bacon's method, since we can in fact see the motion, but how would we go about explaining radiation? We also see in science that there can be two competing forms that explain the same visible phenomena, such as the two theories of light, for example. Bacon does have an answer to this last problem. He says that we ought to set up two competing experiments that would test what we observe, and we could then see which was the more successful. But this already demonstrates what we might doubt about Bacon's new method: in this case, are not the theories themselves determining the experiments, rather than what we observe?

Bacon says that science rests on two pillars, observation and induction, and that we ought to be able to observe nature without prejudice (the prejudices being the idols of the mind). This is perhaps what most people think science is: we take many particular instances and then we generalise a law. Yet the problem is how we account for this mysterious leap from the particular to the universal. How many instances make a general law, and if there is an exception, does this mean the law is no longer a law? There are two problems with the principle of induction as Bacon describes it. One is that we might doubt that any observation is unprejudiced. This is not just in the negative sense that Bacon describes, but also positively: without a theory it is hard to know what one would observe in the first place. Secondly, we might worry about how it is possible to go from many observations to a general law. Just because X has happened many times before, how do we know that it will happen again?

This problem of induction, as it is called, was introduced by Hume, and has for many made naïve inductivism untenable. We shall investigate it in next week's lecture.

Works Cited

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland.

Ladyman, J., 2002. Understanding Philosophy of Science, London; New York: Routledge.

[1] See (Ladyman 2002, pp.22–5) for this summary of Bacon’s method.

[2] As we can see, what Bacon regards as idols we might see as unavoidable necessities, and this is precisely what prevents us from accepting the inductive explanation of science.

What is Science? Lecture 1

September 30, 2015

This is a course about the place of science in our everyday lives. Most of us are not scientists and do not even plan to be scientists, but nonetheless science dominates our conception of the world. Most of us also, I suppose, believe that science tells us the truth about the world, and that generally what scientists say can be trusted. If we want to know the answer to a question or a problem, then it is to science we turn. This is the case not only when it concerns nature, but also when it concerns ourselves. But why do we trust science so much, even when many of us do not do science or have very little knowledge of what it is that scientists do?

Is this because science, for us, has become like a religion? How ironic this would be, since many scientists, including some of the most famous (though by no means all), are atheists and would see science as the complete opposite of religious belief. Think for example of the publication of Dawkins's The God Delusion and the publicity around it.[1] Here is someone who thinks that science and religion are complete opposites. Indeed, for him it is the duty of science to rid the world of religion entirely by demonstrating that all religious people are irrational, and worse, violent. Personally, I do not think that science and religion are making the same claims, though there are many religious people who think they are. If religion is a science, then I am certain it can only be a pseudo-science, and can make no proper scientific claims at all. But equally, if religion is not a science, which I think it is not, then it is absurd to argue that it is a pseudo-science. Are not religion and science doing different things, and making different claims?

But what do we mean by a science, and what does science do that religion does not? The answer might be that science has to do with facts and religion with beliefs. But are we absolutely sure what we mean by facts, and how they are the basis of science? A fact seems to be something I observe. I say, 'There is a table in front of me', and there really is a table in front of me. How do I know that? Because I can see it with my eyes. Facts, then, are something verified with the senses, in this case sight, whereas beliefs do not appear to be. If a Christian says Jesus was resurrected on the third day, how can that be verified by simply looking? At most it is a report; I cannot verify it myself. Religion does not seem to be about facts at all. It is something subjective, personal, and a matter of faith rather than reason.

Is this opposition watertight, though? Perhaps not, if we think the difference between religion and science is just that one is about facts and the other not. Are facts really that simple? Isn't there more to facts, so to speak, than meets the eye? If I did not have an understanding of what a chair was, would I see a chair at all? Let us imagine that rather than being a member of my culture, where coming across chairs is pretty common, I was born into a tribe in the deepest Amazon that had never come across chairs, or Western civilisation for that matter. Let us imagine again that, for some unknown reason, a chair being transported by air fell out of the plane and landed in a clearing in my forest that I used every day to hunt. Would I see a chair? No, I certainly would not. No doubt I would have an image of a chair on my retina, and that image would travel down my optic nerve into my brain, but I would not see a chair, because I have no concept of a chair.[2]

How do we pick up concepts of things? Not simply by looking at them; otherwise we'd be right back at the same paradox again. Rather, they are part of the conceptual background that makes up our world, and this conceptual background is something we learn in any given culture.[3] Only in this way can I recognise something as something, rather than as a mysterious object that has suddenly appeared in my world, like the chair in the clearing of the jungle. The meaning of the chair, the fact that I can see it as a chair, is given by the context of its use. In this sense, if we apply this to our idea of science, scientific practice might define what is and is not a fact in advance of the very research that is meant to explain these facts. In other words, theories are not justified by facts, because in reality theories precede facts.

This is exactly what we find when we look at the practice of scientists. They don't just look at things in isolation and then base their theories upon them; rather, their theories already tell them where to look and what they should be looking for, so that they know what the relevant facts are. If you like, facts are not just facts. They are not just perceptions; they are perceptions plus understanding, and the perception does not come first with the understanding second: they both arrive together. They are part of the same conceptual or, if you prefer, phenomenological whole, which is how we actually see the world within a given context, whether we are scientists or not.[4]

Science already makes us aware of this, because when we think of a fact, we don't just think about a state of affairs but make statements about a state of affairs, and these statements only make sense within a community of speakers who understand them. The fact isn't that there is water on Mars, but that someone says that there is water on Mars, and that someone else can observe it and agree that there really is water on Mars. There would be water on Mars whether or not there was science, or even human beings. It only becomes part of a scientific theory when someone says 'There is water on Mars', and someone else gets a telescope and sees that this statement is true.

Rather than saying that science is based on facts, perhaps it would be better to say it is founded upon statements which can be verified through observation.[5] Yet aren't we faced with the same problems we found with the chair? What we find relevant in an observation will again be determined by the conceptual background that we inhabit. Chalmers uses an example from the history of science to explain this (Chalmers 1999, p.16). Before the scientific revolution of the 17th century, it was taken as given that the earth was stationary. The observable phenomena seemed to corroborate this. When I jump upwards, I do not fall back to a different place on the earth, which would seem to be the case if it were moving. Of course, the reason why this is not the case is inertia: I and the earth are moving in the same direction, and thus the same forces are acting upon us (for the same reason, a tennis ball that you throw up in the air in a moving car falls back into your hand, because you and the car are moving in the same direction and at the same speed). But because no one knew the theory of inertia at the time, what was observed did appear to prove that the earth was stationary (and I imagine there are some who still believe this for the very same reason). It is the theory that determines the meaning of our observations, rather than the other way around, with our observations determining our theory.

Does this mean that science is just subjective, and what you see is just what you want to see? Then there would not be any difference between science and religion, for it clearly is the case that religion is subjective.[6] Rather, what is required to clearly delineate science is a better definition of observation, for this is precisely what scientists do. Rather than seeing observation as something private and passive, where I see the chair and the image is projected on my retina, we should see it as public and active. Active, because the observer is always involved in what they see, correcting and changing their observations in relation to their understanding and interpretation; and public, because these observations are always shared with others who can interpret the results.

Chalmers gives us two examples of how scientists actually work (Chalmers 1999, pp.21–4). The first is Hooke's pictures of the eye of a fly under a microscope. The image of the eye was affected by the very instruments he was using, so he had to work out how to use a light source that did not distort what he was looking at (eventually, candlelight through brine). He then published what he saw, and told people how he had seen it, so that they could do the same for themselves and see whether they came up with the same results. The second is Galileo, who saw in his telescope the moons of Jupiter but needed to prove their existence to his fellow scientists. For this he had to modify his telescope so that he could gain an accurate measurement of their trajectories to show that they were moving around the planet, and when he had obtained these results he published them, so everyone else could test them for their reliability.

What is important in this process is to understand that these observations are not infallible. The difference between science and religion is not that one is infallible and the other isn't (however you might want to understand this). On the contrary, observation is fallible. What we see is determined by how we look, and how we look by the conceptual background we find ourselves in. But anyone can come along and show us that this background is incorrect and is preventing us from seeing something. What is important, however, is how they do this. They do it by pointing to what is observable when we change our theories, and by ensuring that this hypothesis can be tested by others. They do not do so by simply asserting a belief about something, such as that the moon is made of cheese. So Chalmers can define science in this way: 'According to the view put forward here, observations suitable for constituting the basis for scientific knowledge are both objective and fallible' (Chalmers 1999, p.25). This means that objectivity is not the same as absolute truth, but quite the opposite: what is objective can be corrected and changed through observable evidence, whereas what is subjective cannot. A religious belief based on observation would not be a religious belief at all, but an inferior and poor scientific theory, since it would never be falsifiable. This does not mean that religion per se is inferior; that would be the case only if it were doing the same thing as science. The test of faith is not observation, but existence. To be a Christian, for example, is not to believe X, Y, and Z, but to act as a Christian. Only when Christians think their faith is supported by objective knowledge do they come into conflict with science, as for example those people who think that the creation story is a scientific theory in competition with evolution.

The irony, of course, is that they are dependent on the very scientific method that they despise, for one can only disprove a science by another science.

Works Cited


Ayer, A.J., 2001. Language, truth and logic, London: Penguin.

Chalmers, A.F., 1999. What is this Thing Called Science?, St. Lucia, Qld.: University of Queensland.

Dreyfus, H.L., 1991. Being-in-the-world: a commentary on Heidegger’s Being and time, division I, Cambridge, Mass.: MIT Press.

Gardner, S., 2006. Kant and the Critique of pure reason, London; New York: Routledge.

Jebens, H., 2004. Cargo, cult, and culture critique, Honolulu: University of Hawaii Press.

Uys, J., 2004. The gods must be crazy, Culver City, Calif.: Columbia TriStar Home Entertainment.

[1] You can hear his defence of this book on NPR here, http://www.npr.org/templates/story/story.php?storyId=9180871.

[2] I am thinking here of what are called 'cargo cults', though the evidence of such practices is controversial (Jebens 2004). There is a famous film about a Coke bottle that plays with this idea (Uys 2004).

[3] We might ask further whether this conceptual background is even first. Are we not first of all living in a world before we understand it? This is the basis of Dreyfus’s stress on the importance of Heidegger’s philosophy (Dreyfus 1991).

[4] The key issue here is whether this position would lead to relativism. This depends on how one understands the truth and objectivity of science. This will be at the heart of our reading of Kuhn in the second half of this course.

[5] Such a position is what is called logical positivism, whose most vocal defender is A. J. Ayer (Ayer 2001).

[6] This is not a criticism, for what is subjective is not necessarily worse than what is objective, and indeed the objective might have its basis in the subjective, but it all depends on what you mean by the subjective. This was certainly Kant’s view, who placed practical reason (subjective, though in a special way) above theoretical reason (Gardner 2006, pp.319–25). The ultimate end of reason is not knowledge for its own sake, but the Good. We might call this position humanist.

