
Natural Sciences

adapted from theoryofknowledge.net

As befits a subject that is supposed to present us with objective truths, the natural sciences are fairly easy to define. We'll start with the OED entry to give us a firm foundation:

​

 'natural science':

  • noun a branch of science which deals with the physical world, e.g. physics, chemistry, geology, biology.

​

This includes our own physiology, of course, so anything in medicine that deals with how our bodies work also falls into the natural sciences rather than the human sciences. When it comes to our behaviour, though, we're over the line and into the latter subject area. Technology straddles both the natural and human sciences: how we create it belongs to the former; how we use it, to the latter.

How do we acquire knowledge in the natural sciences?

 

1. The scientific ‘method’

​

400 years ago, Galileo set up an experiment to test the hypothesis that objects accelerate when they fall. Experimentation was commonly employed by Arab scholars, but their methods were looked down on by Europeans, who followed the Church's dictum, inherited from Aristotle, that conclusions could only be reached through discussion and logic.

​

Galileo's reliance on empirical evidence helped lead Europe into the Enlightenment, and established the scientific method, which is still regarded as the only satisfactory approach to acquiring knowledge about the natural world.

The stages of the scientific method

​

Watch the clip in the video icon on the right and write a short description of each of the following stages of the scientific method.

​

  1. The problem

  2. Hypothesis

  3. Prediction

  4. Testing

  5. Peer review

  6. Publication

  7. Replication or falsification

  8. Theory

  9. Corrections and modifications

  10. Laws

 

Science should, therefore, provide an explanation based on impartial research backed by rigorous checks and balances, and not belief. False scientific evidence should never get past stage 4, let alone become a theory.
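
To make the journey from hypothesis to prediction to testing more concrete, here is a minimal sketch in Python (our own illustration – the numbers are made up, not Galileo's actual measurements) of how the prediction drawn from Galileo's hypothesis that falling objects accelerate uniformly – namely, that the distance fallen grows with the square of the elapsed time – could be checked against data.

    # A minimal sketch (illustrative numbers only) of the hypothesis -> prediction -> test loop.
    # Hypothesis: falling objects accelerate uniformly.
    # Prediction: distance fallen is proportional to the square of the elapsed time, d = k * t**2.

    measurements = [  # (time in seconds, distance in metres) -- made-up values for illustration
        (1.0, 4.9),
        (2.0, 19.7),
        (3.0, 44.0),
        (4.0, 78.5),
    ]

    # Estimate the constant k from each observation; under the hypothesis they should all agree.
    ks = [d / t ** 2 for t, d in measurements]
    k = sum(ks) / len(ks)

    # Test: does every measurement match the prediction to within, say, 5% measurement error?
    consistent = all(abs(d - k * t ** 2) <= 0.05 * d for t, d in measurements)

    print(f"estimated k = {k:.2f} m/s^2")
    print("prediction survives this test" if consistent else "prediction is falsified by this data")

A single run like this corresponds only to the testing stage, of course; replication by others, peer review and publication would still have to follow before the hypothesis could harden into a theory.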

2. Serendipity in Science... or making one's own luck?

​

However, rigorous checks, balances and testing still don't fully explain how we acquire knowledge in the natural sciences. Serendipity is a peculiar English word with a very specific meaning, and it is very useful when applied to discoveries in science that have been made 'by chance' – although, as Louis Pasteur said:

'Chance favours the prepared mind.'

 

The best way to understand the role of chance is to focus on specific cases, and below is a list of examples to choose from. Pick at least three to research, then answer the questions below.

​

- How important is chance to these scientific breakthroughs?  

- To what extent were these scientists making their own luck through rigorous preparation?

- How far do they support Pasteur’s assertion? 

​

  1. Penicillin

  2. The pacemaker

  3. Radiation

  4. Safety glass

  5. Teflon

  6. Saccharin

  7. LSD

  8. Rayon

  9. Viagra

  10. Uranus

  11. X-Rays


3. The role of induction and falsification

​

We have seen that the scientific method involves formulating a hypothesis. There are many ways in which a scientist may arrive at their hypothesis (including serendipity), but probably the most common one is observationalist-inductionism – that is, observing that a phenomenon has always occurred that way in the past, and inducing that it will always happen that way in the future. Proving this hypothesis to be true will be the aim of their experimentation during the testing stage.


But there is a problem with this way of viewing science. We cannot prove anything about the natural world with 100% certainty, so the purpose of science is not to show that things are true, but to show that things are false. If a hypothesis stands up to testing over a long period of time, it is given the term 'theory'. This means that we are not so much interested in theories that are true as in theories that have not yet been shown to be false.
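
As a rough illustration of this asymmetry (a sketch in Python we have added, using the classic 'all swans are white' example rather than anything from the text above): no finite number of confirming observations can prove a universal hypothesis, but a single counterexample is enough to falsify it.

    # A minimal sketch of the asymmetry between confirmation and falsification.
    # The universal hypothesis "all swans are white" is never proven by any finite number
    # of white swans, but a single non-white swan falsifies it outright.

    def test_all_swans_are_white(observations):
        """Report the status of the hypothesis after checking a list of observed swan colours."""
        for colour in observations:
            if colour != "white":
                return f"falsified by a {colour} swan"
        return f"not yet falsified after {len(observations)} observations (but still not proven)"

    print(test_all_swans_are_white(["white"] * 10_000))              # many confirmations prove nothing
    print(test_all_swans_are_white(["white"] * 10_000 + ["black"]))  # one counterexample settles it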

The scientist and philosopher who advocated this idea was Karl Popper. In the mid-twentieth century he challenged what was then the accepted view that science worked along observationalist-inductionist lines – that is, reaching conclusions about hypotheses on the basis of previous results rather than on the potential falsifiability of the idea. According to Popper, nothing that cannot be falsified can be called a scientific hypothesis or theory. Falsifiability is the requirement that, for any hypothesis to have credence, it must be inherently disprovable before it can be accepted as a scientific hypothesis or theory.

​

For example, someone might claim, "the earth is younger than many scientists state, and in fact was created to appear as though it was older, through deceptive fossils etc." This claim is unfalsifiable, because it can never be shown to be false: if you were to present such a person with fossils, geological data or arguments about the nature of compounds in the ozone, they could dismiss your evidence by saying that it was fabricated to appear that way, and so isn't valid.

Importantly, falsifiability doesn't mean that there are currently arguments against a theory, only that it is possible to imagine some kind of argument which would invalidate it. Falsifiability says nothing about an argument's inherent validity or correctness. It is only the minimum trait required of a claim that allows it to be engaged with in a scientific manner – a dividing line between what is considered science and what isn't. Another important point is that 'falsifiable' does not simply mean 'not yet proven true'; after all, a conjecture that hasn't been proven yet is just a hypothesis.

​

For many sciences, the idea of falsifiability is a useful tool for generating theories that are testable and realistic. Testability is a crucial starting point around which to design solid experiments that have a chance of telling us something useful about the phenomena in question. If a falsifiable theory is tested and the results are significant, then it can become accepted as a scientific truth.

 

The advantage of Popper's approach is that such 'truths' can still be falsified later, when more knowledge and better resources become available. Even long-accepted theories such as gravity, relativity and evolution continue to be challenged and adapted.

 

To research Popper's ideas about falsification more fully, and to see why he thought the acquisition of knowledge in science needed a more rigorous approach, watch the Crash Course video below and complete the multiple-choice and short-answer questions in the PDF attached at the bottom right corner.
 

However, as with all things in TOK, alternative perspectives need to be considered. Some people have criticised Popper's ideas because it is difficult to show that certain theories are false – evolution, for example. Indeed, Popper himself said of it:

​

'Darwinism is not a testable scientific theory, but a metaphysical research programme.'

​

Moreover, Mano Singham, writing in Scientific American in 2020 (article on the right), argues that it is the very nature of science that makes falsification an empty process: any scientific discovery is the product not of a single experiment but of multiple interlocking theories, so any attempt to 'falsify' a theory in isolation is doomed to failure, and science should instead focus on whatever works at this moment in time. As the article puts it:

​

But the field known as science studies (comprising the history, philosophy and sociology of science) has shown that falsification cannot work even in principle. This is because an experimental result is not a simple fact obtained directly from nature. Identifying and dating Haldane's bone involves using many other theories from diverse fields, including physics, chemistry and geology. Similarly, a theoretical prediction is never the product of a single theory but also requires using many other theories. When a "theoretical" prediction disagrees with "experimental" data, what this tells us is that there is a disagreement between two sets of theories, so we cannot say that any particular theory is falsified.

​

Fortunately, falsification—or any other philosophy of science—is not necessary for the actual practice of science. The physicist Paul Dirac was right when he said, "Philosophy will never lead to important discoveries. It is just a way of talking about discoveries which have already been made.” Actual scientific history reveals that scientists break all the rules all the time, including falsification. As philosopher of science Thomas Kuhn noted, Newton's laws were retained despite the fact that they were contradicted for decades by the motions of the perihelion of Mercury and the perigee of the moon. It is the single-minded focus on finding what works that gives science its strength, not any philosophy. Albert Einstein said that scientists are not, and should not be, driven by any single perspective but should be willing to go wherever experiment dictates and adopt whatever works.

​

However, the idea that falsifiability is an integral part of a scientific theory remains a very useful way of testing the validity of most scientific hypotheses, and of separating out those that have little claim to scientific legitimacy.

​


4. Are scientists always objective?

​

The scientific method is designed to be a flawless system, one that not only protects us from 'bad science' but also allows 'good science' to emerge and flourish. How well it does this is open to interpretation. To form an opinion, read through the attached Word document on a 2013 climate change e-mail scandal.

​

  1. What does the article say about the peer review system?

  2. Why does the article say that there are ‘cracks in the system’?

  3. What is the University of East Anglia’s CRU and who heads it?

  4. What allegations have been made against him?

  5. If these allegations are true, what does it suggest about the scientific method?

  6. Research another scientific scandal, explaining what occurred, why, and the results.

  7. Do you think that scientists have any special ethical obligations that other professionals don’t have? If so, what are they, and why?

 

Concluding questions

​

  1. Which element of our acquisition of knowledge in the natural sciences do you think is the most important? Why?

  2. How important is the role of reason in the process of knowledge acquisition in the natural sciences?

  3. Is the scientific method flawless?

​

Now your task is to read the following article on the origins of COVID by clicking on the book icon:

- Discuss and note down evidence that convinces you and evidence that doesn't. What do you think about the ideas?

- Research the writer and the publication. Does this alter or reinforce your initial opinion about the ideas? Explain.

- Read either or both of the articles available via the question marks. What do they argue?

- How scientific do you think this theory on COVID origins is, and why?


What qualifies a method as scientific?

​

We have decided that natural science is more a system of knowing, based on a method, than it is a body of knowledge – which may prompt us to see it as a way of knowing just as much as an area of knowledge. Anything that adheres to the rules of procedure of the scientific method can be called 'scientific'. Anything that does not falls into another category, such as what we call pseudo-science, or superstition.

Pseudoscience has been nicely characterised by Daisie and Michael Radner, whose work is summarised in the Skeptic's Dictionary. They identify its common characteristics as being:

​

  1. The tendency to propose theories as scientific, but which cannot be empirically tested in any meaningful way

  2. The dogmatic refusal to give up an idea in the face of overwhelming evidence that the idea is false and the use of ad hoc hypotheses to try to save the theory

  3. Selective use of data: the tendency to count only confirming evidence and to ignore disconfirming evidence

  4. The use of personal anecdotes as evidence

  5. The use of myths or ancient mysteries to support theories, which are then used to explain the myths or mysteries

  6. Gullibility, especially about paranormal, supernatural, or extraterrestrial claims

​

Other characteristics of pseudoscience could be listed, such as:

​

  1. The promise of understanding everything about you without knowing anything about you

  2. The claim that a single cause and cure has been found for all diseases

  3. The claim that some sort of gizmo can align, balance, strengthen, harmonize, or otherwise positively affect your “energy field”

  4. The lack of scientific studies to support claims made in testimonials or referring to those testimonials when the claims fail scientific tests

  5. The claim that mysterious energies, not yet detectable by scientific instruments, explain how your pseudoscience works

  6. The tendency to be extremely complex, making it very difficult to determine exactly what the pseudoscience can predict and therefore very difficult to test

  7. The tendency to be ignorant of or to ignore alternative explanations for observations, e.g., ignorance of physics or psychology leading to claims about ghosts; ignorance of placebo and non-specific effects leading to claims that a bogus therapy “works”

  8. The tendency of the purveyor of a product to put all his money into marketing and production, and none into research and testing (look especially for those products promoted by celebrities or athletes that either contradict the experts in the medical sciences or claim to be able to magically enhance your intelligence, strength, reproductive power, etc.)

  9. The assertion of completely absurd and stupid statements such as “That’s why we don’t use double-blind controlled experiments: they don’t work!” (Asserted after a pseudoscience fails a scientific test.)

​

​

Find out a little about each of the following, and explain why we generally consider them 'pseudo-sciences' by attaching some of the above characteristics to each one:

​

  1. The Flat Earth Society

  2. Paranormal investigations

  3. Ufology

  4. Phrenology

  5. Crystal healing

  6. Creation science

​

But before we pour scorn on those who believe in pseudoscience, consider that some things we accept now started off as pseudo-sciences.

 

The idea that meteorites come from outer space, and the theory of continental drift, are two examples – so the line between science and pseudoscience isn't always as distinct as we might like. Click below to find out how pseudoscience can actually benefit mainstream, accepted science...


How has scientific progress shaped our worldview?

​

1. Linear progression?

​

To answer this question, we first have to understand how (or even whether) science progresses in its acquisition of knowledge. At first sight, this seems obvious: surely we discover new ideas about the natural world and add them to what we already know, building up a broader and clearer picture. Then we find more knowledge, and the picture becomes even more extensive. This is how it works in your own studies: you go from knowing very little about a subject to building up a large body of knowledge about it. Correspondingly, your files go from being thin or non-existent to bulging at the seams, so that when you come to revise for your exams, the amount of knowledge you have is annoyingly large. This simple accumulation of knowledge is called a linear progression of knowledge.

​

But as soon as you think about the linear progression of knowledge, you realise that it doesn't work that way in the natural sciences. We have already seen that scientific theories can't be proven true, only proven false (indeed, as Karl Popper said, they have to be potentially falsifiable in order to qualify as proper scientific theories), and this is what often happens: a new scientific theory comes along to replace the old one.

​

So science clearly doesn’t just proceed in a simple linear fashion. New ideas replace old ideas, and sometimes the whole way we view the world is shifted, rather than just modified. Unfortunately, human nature being what it is, people are often reluctant to accept this shift in the way we view the world. As Max Planck said, ‘A new scientific truth does not triumph by convincing opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’


2. Kuhn and paradigm shifts

​

This led the philosopher of science Thomas Kuhn to propose that science progresses as a result of a series of revolutions. Kuhn said that these revolutions happen after a period of 'normal science', in which we view the world in a certain way (this way of viewing the world is called a 'paradigm') and practise our science to fit in with that paradigm. When a new idea is proposed that refutes the paradigm, it usually isn't immediately adopted; rather, the scientist who came up with it is told that their ideas are wrong.

​

But if other scientists have similar hypotheses, there is a build-up of opposition to the existing paradigm. This may take years or even decades to gain momentum, but eventually the opposition will overcome the existing paradigm and lead to what Kuhn termed a 'paradigm shift'. Then scientists (and the rest of us) will look at the world in a different way, through a new paradigm.

​

Probably the best example to which to apply Kuhn's paradigm-shift model is the replacement of the geocentric paradigm with the heliocentric paradigm (but we use this purely as an illustration – be more original in your TOK essay or presentation: you need examples that will engage and surprise the examiner).

​

It is clear why people since prehistoric times have believed that the earth is static and the celestial bodies revolve around us: we don't seem to move, whilst the stars do. Ptolemy, a Roman citizen living in Egypt (who wrote in Greek…), laid down a seemingly scientific set of observations to confirm this view in the second century AD, and it became known as the geocentric theory.

​

Ptolemy's paradigm was that the earth was at the centre of the universe, so his problem was to come up with a set of rules to work out how the planets revolved around the earth. This he seemed to do after years and years of calculations. Because gravity (obviously) played no part in his paradigm, he could do more or less what he wanted with the planets, so their orbits could involve moving at inconsistent speeds and even reversing direction at points.
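
To see how much freedom this gave him, here is a small sketch (our own toy model with arbitrary numbers, not Ptolemy's actual parameters) of a planet riding on an epicycle – a small circle whose centre itself travels around a larger circle, the deferent, centred on the earth. With suitable speeds, the planet's apparent direction as seen from the earth periodically reverses, reproducing retrograde motion without any appeal to gravity.

    # A toy deferent-and-epicycle model (arbitrary parameters, purely for illustration).
    # The planet rides on a small circle (the epicycle) whose centre moves around a large
    # circle (the deferent) centred on the earth. Tracking the planet's direction from the
    # earth shows its apparent motion sometimes reversing: retrograde motion.

    import math

    R, r = 10.0, 4.0   # radii of the deferent and the epicycle
    W, w = 1.0, 5.0    # angular speeds (radians per unit time) on each circle

    def apparent_angle(t):
        """Direction of the planet as seen from the earth, in radians."""
        x = R * math.cos(W * t) + r * math.cos(w * t)
        y = R * math.sin(W * t) + r * math.sin(w * t)
        return math.atan2(y, x)

    # Sample the apparent direction and report whenever its sense of rotation flips.
    previous_angle = apparent_angle(0.0)
    previous_sign = None
    for step in range(1, 400):
        t = step * 0.01
        angle = apparent_angle(t)
        # wrap the change into (-pi, pi] so the flip detection works across the boundary
        change = math.atan2(math.sin(angle - previous_angle), math.cos(angle - previous_angle))
        sign = change > 0
        if previous_sign is not None and sign != previous_sign:
            print(f"apparent motion reverses direction at t = {t:.2f}")
        previous_angle, previous_sign = angle, sign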

Ptolemy's calculations seemed to work, and were adopted by the Christian Church, forming a vital part of its paradigm about the universe and our place within it. Since the Church preached that God had created the universe and the earth, we, as his most important creations, must have been placed firmly at the centre, and Ptolemy's calculations seemed to confirm this. As 1 Chronicles 16:30 states, "the world also shall be stable, that it be not moved."

​

But the Ptolemaic system was never built on solid foundations, and it was only a matter of time before someone came up with something better. Suggestions that the sun rather than the earth was at the centre of the universe – the 'heliocentric theory' – had been made sporadically since the Greek era, but it was the Polish astronomer Copernicus who did the most extensive research on the problem, and who came up with calculations and observations to support his idea. He published his findings in what is probably the most famous scientific book ever written, De revolutionibus orbium coelestium ('On the revolutions of the heavenly spheres'), in 1543.

​

Although Copernicus's ideas were also filled with problems – for example, he still placed the sun at the centre of the whole universe and assumed that the orbits of the planets were perfectly circular – his proposals represented a revolution in how we saw the world, and meant, if accepted, that God had not placed human beings at the centre of the universe. But contrary to popular belief, the Catholic Church did not oppose his ideas, and even encouraged him to publish further works.

 

Even the modifications made by Galileo in the early 17th century were initially accepted by the Church – it was only after Galileo had supposedly made fun of Pope Urban VIII in a book weighing up the pros and cons of heliocentrism that the theory was condemned by the Church.


3. Paradigm shifts and our perception of reality

​

There are other examples of how scientific revolutions have, in an intellectually violent way, moved our understanding of the world forward – evolution is probably the next best-known example.

​

But how does this change our perception of reality? One of the problems is that it’s hard to see the world through another paradigm. Consequently, it’s hard to imagine a world which is characterised by a belief in the geocentric theory, and, for most of us, it’s hard to conceive of an understanding of nature in which God created everything in six days (and rested on the seventh) – evolution, in other words, has become for most of us the paradigm of how we view the world.

​

Perhaps it is easier to consider smaller paradigms...

​

What was the world like before mobile phones? Before Spotify? Before Facetime?

 

Extend this further: what was life like before medical advances we take for granted, like anaesthetic or antiseptics?

 

What other paradigm shifts in science have you been aware of during your lifetime, or that of your parents?

What paradigms can we talk about in other areas of knowledge?
