The Dunning-Kruger effect

“If you aren’t confused by quantum theory, then you haven’t really understood it” – paraphrased quote attributed to John Wheeler/Niels Bohr

“Ignorance more frequently begets confidence than does knowledge” – Charles Darwin


Those two quotes, I feel, are antecedents to this topic, in that they are the conceptual nuggets that come long before a theory. Although the first quote was really intended to show the counter-intuitive nature of quantum mechanics, I think it applies when discussing this effect.
I found out about this study just the other day, and I felt it gave some structure to various half-thoughts I have had about this topic. Like me, I’m sure you have experienced this.

So what is this effect? In short, it shows a slightly inverse correlation between ‘how good you are’ and ‘how good you think you are’. What the paper highlights is that people unskilled in an area, as they learn the rudiments, tend to overestimate their abilities (beginner’s luck?), while those who are skilled underestimate theirs. The paper attributes this effect to a lack of metacognition.

I think this study is already quite familiar, in that most people have, at one point or another, experienced this. Usually when we start to learn something (a skill, sport, subject) we tend to be a little too confident in our abilities and as we progress a little we begin to recognise our shortcomings.
I’ll go through the paper and studies Dunning and Kruger carried out 16 years ago and then start a discussion as to perhaps why this may occur. I would love to hear your thoughts on this as well!

The graph in the featured image is a little exaggerated for effect (see the graph below from the study), but it highlights the effect well. It is quite paradoxical: the incompetent believe themselves to be competent, yet the solution to remove this discrepancy and make them realise they are incompetent is to make them competent.

The paper discusses the 4 predictions the authors made (I’ve paraphrased slightly):

  1. Incompetent individuals, compared with their more competent peers, will dramatically overestimate their ability and performance relative to objective criteria.
  2. Incompetent individuals will be less likely to accurately gauge competence when they see it – be it in themselves or others. In short, they won’t know what they don’t know.
  3. Incompetent individuals will be less able to gain insight into their true level by social comparison. Since, according to prediction 2, they cannot gauge competence in other people, they fail to make the necessary comparison between competent people and themselves, and thus fail to make an accurate evaluation of their own abilities. (Takes one to know one.)
  4. The incompetent recognise their incompetence, (paradoxically) by gaining skills and becoming more competent.

Incompetent is a bit of a strong word; what they mean is not people who are incompetent in general (in life) but in the particular set of skills which they tested. I just wanted to clarify, as I will be using that word with that particular meaning.

They conducted 4 studies to test these predictions; each study consisted of tests that assessed the ability of the candidates in particular areas. Study 1 tested humour, studies 2 and 4 tested logical reasoning, whilst study 3 tested English grammar.

The problem with prediction 1 is that the lower one’s score, the higher the chance of overestimating it. So they looked at how much someone’s estimate of their skill was miscalibrated with respect to their test score.

65 Cornell University students were given jokes that had been rated for comedic value by several professional comedians (representing the expert opinion). The students also had to judge the comedic value of each joke, and the more their ratings coincided with the expert opinion, the better their test score.
This was not asking whether they found the joke funny, as that is quite subjective, but whether the joke could be considered funny by most people. Although the test is not purely objective, it does give a good indication, since the criterion by which we consider a joke to be funny is whether everyone laughs, i.e. comedy comes from the harmony of your humour with everyone else’s.

[Graph from the study: perceived ability and perceived test score vs. actual test score, by quartile]
This is a graphical representation of the data they collected, and as you can see, those who performed the poorest (bottom quartile) grossly overestimated their performance and their ability to recognise humour. Although they certainly did not believe themselves to be as good as the top quartile, they did believe themselves to be slightly better than average.

They further investigated the empirical basis by carrying out study 2, which I think was more rigorous and objective than the previous test.

This time 45 undergraduate students were given a 20 minute logical reasoning test, and were later asked to make three estimates.

  1. How they performed compared to their peers (a percentile ranking).
  2. Their logical reasoning ability compared to their peers.
  3. How many test questions they answered correctly.

The results of study 2 reconfirmed those of study 1: those in the bottom quartile showed gross miscalibration between their perceived and actual ability, and between their perceived and actual test scores.

Studies 3 and 4 were conducted in two phases. The first reconfirmed the results of the first two studies; the second phases tested predictions 3 and 4.
Both delivered positive results and confirmed the predictions. I won’t go through those tests here, mostly because their method largely repeats that of studies 1 and 2, and also because the paper already does a good job of presenting the results. (I’ll post the link to the paper below if you wish to read the last two studies.)

Limitations of their analysis

Of course, for the incompetent to overestimate their abilities, they must have some rudimentary understanding of the area first.

No one would challenge Ronda Rousey if they did not have any martial prowess. For incompetent individuals to overestimate their ability, they would first need a certain threshold of knowledge or past experience to give them the boosted confidence in their abilities; otherwise they would not even have an intuition of how to respond.
In other areas, as they mentioned, competence depends not on cognitive abilities but on physical skills. “One need not look far to find individuals with an impressive understanding of the strategies and techniques of basketball, for instance, who could not ‘dunk’ to save their lives (these people are called coaches).”

Their view (as highlighted in the predictions) is that those who are incompetent lack the metacognitive skills to accurately assess their true ability. Metacognition is, I think, roughly what we mean by self-awareness. Study 4 does reflect that; however, there are several other reasons why such a result could be seen, other than a deficit in metacognitive ability.

There are three main criticisms of the suggestion of a metacognitive deficit in low scorers: regression to the mean, regression to the mean plus a better-than-average effect, and the role of task difficulty.
Regression to the mean is a statistical artefact whereby performers who score low will, on subsequent tests, score higher, i.e. closer to the mean, by pure chance. The low scorers overestimated their ability because perceived performance is dictated by more than actual performance. However, there is a lack of symmetry: if regression alone were the cause, the high scorers should underestimate by just as much, yet their underestimation is nowhere near the size of the overestimation at the lower end.
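To see how far regression to the mean alone can go, here is a minimal simulation, a sketch of the criticism rather than of the original study. The model is entirely my own assumption: a person's measured score and their self-estimate are treated as two independently noisy readings of the same underlying skill, with no metacognitive deficit built in.

```python
import random
import statistics

random.seed(42)

# Hypothetical model (an assumption of this sketch, not from the study):
# a person's measured score and their self-estimate are two independently
# noisy readings of the same underlying skill. No metacognitive deficit
# is built in, so any quartile gap that appears is regression to the mean.
N = 10_000
skill = [random.gauss(50, 10) for _ in range(N)]
actual = [s + random.gauss(0, 10) for s in skill]     # measured performance
perceived = [s + random.gauss(0, 10) for s in skill]  # self-estimate

# Sort people into quartiles by ACTUAL score, then compare mean actual
# vs mean perceived score within each quartile.
order = sorted(range(N), key=lambda i: actual[i])
results = []
for q in range(4):
    idx = order[q * N // 4:(q + 1) * N // 4]
    mean_actual = statistics.mean(actual[i] for i in idx)
    mean_perceived = statistics.mean(perceived[i] for i in idx)
    results.append((mean_actual, mean_perceived))
    print(f"Q{q + 1}: actual {mean_actual:5.1f}  perceived {mean_perceived:5.1f}")
```

Sorting by the noisy actual score pulls both tails back toward the mean, so the bottom quartile “overestimates” and the top quartile “underestimates”, but symmetrically; the asymmetry in Kruger and Dunning’s data is precisely what this purely statistical model fails to reproduce.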

Regression to the mean plus better-than-average. Krueger and Mueller, in a study in 2002, explained that people tend to have an excessively positive view of themselves, so the large overestimation at the lower end was a combination of regression to the mean and this self-enhanced view, while at the top end the underestimation due to regression to the mean was offset by self-enhancement.

Task difficulty. Kruger and Dunning discuss this in the paper too: the tasks they used were relatively easy, and hence confirm the effect, but this consistency breaks down for harder tasks (there have been studies on this by Burson and colleagues in 2006). As the paper puts it, “In some domains there are clear and unavoidable reality constraints.” No one denies their inability to translate obscure and ancient Sanskrit passages, or to build a rocket. In fact, in difficult tasks like these, the bias is to rate yourself as bad as the next person.
Why is there such a large disparity between actual ability and perceived ability at both extremes?

Perhaps that is just a function of how we learn. We learn through failure, negative feedback and so on. But failure has many causes and too many variables (each of which can cause failure in equal measure), whilst for success almost everything has to function correctly (ability, work, luck etc.); failure can occur if even one of those aspects underperforms. Thus pinpointing what caused a failure becomes clouded, it is hard to gauge which factor is good and which is bad, and so it is hard to grasp what competence even means in a particular area.

The paper discussed several other reasons why this might occur, one of which is motivational bias. This makes sense to me. If you were to embark upon learning a new skill, say kickboxing, it would be to the detriment of your learning process to accurately estimate your incompetence; you would be discouraged from even beginning. Instead you might think, “I could learn this, I am already quite agile and can throw a decent enough punch.” I recently started getting into Muay Thai, and before I started it is safe to say I definitely thought too highly of my conditioning (in painful retrospect).

This may just be a convenient lie we tell ourselves before facing a challenge that tests our abilities. It may also shed some light on why we require teachers and mentors in our lives; why can’t we just do it (whatever it may be) on our own?

We need an experienced person to show us our incompetence and remove the cognitive deficit that blinds us to true competence. Furthermore, is this because we are social creatures, or is it one of the reasons we became social creatures?

This may also explain why, when we are young, we tend to be brash and more sure of ourselves.
Of course, what I have said here is just speculation based on the results that I read. The paper simply explored whether, and why, people hold “overly optimistic and miscalibrated views of themselves”: those with limited knowledge suffer a dual burden of not knowing enough, and not knowing enough to know they do not know enough. There have been lots of other papers providing reasons why this might occur. Perhaps this is just a statistical effect, and not a psychological one.


Whatever the reason for this may be, what should be taken away from this effect is to constantly recognise that you may not be as good as you think you are, remove the ego that says otherwise, and then expand your knowledge. Reinforce the idea that you know nothing.
Is Ygritte secretly Socrates?

For this work the pair won the 2000 Ig Nobel Prize, an achievement Socrates would have esteemed greatly.


what the Dunning-Kruger effect is and isn’t


The problem with Statistics

Statistics remain a powerful tool, and a favoured form of evidence for speakers and writers, because they provide numbers and data to support ideas and conclusions. They are trusted because they are based on quantitative studies, and are accepted as the final word on a topic; an irrefutable truth. But that is a dangerous fallacy: statistics are often misused, and the same statistic can be used to argue both for and against the same position. This misuse leaves statistics open to interpretation, which is fundamentally counterproductive to the science that was carried out to produce them.

“Statistics are like bikinis. What they reveal is suggestive, but what they conceal is vital.” – Professor Aaron Levenstein.

When properly interpreted and understood, statistics are highly useful, especially if their user is an expert in the field. For example, they are useful for medical researchers wishing to locate the cause and likely source of a disease. John Snow carried out observations on who was contracting cholera, and where they were contracting it, during London’s cholera outbreak of 1854. He was able to conclude that the polluted waters of the Thames were responsible, and to make recommendations that did much to reduce further incidence.

But statistics are also effective at misleading the public, and very rarely do they add significant weight to an argument made during a discussion.

“Politicians use statistics in the same way that a drunk uses lamp-posts—for support rather than illumination” – Andrew Lang

They are often misreported, and errors are made due to an improper or partial understanding of how the statistics were actually gathered. The problem does not lie with the statistics themselves but with their misuse by people who do not take into account the assumptions, limitations and biases of the method used to gather them. Let me illustrate my point: I will try to convince you smoking does not cause lung cancer. Here are two statistics for 2014 I found:

  1. 20% of all adults aged 16+ reported that they smoke. (~12 million people)
  2. 1 in 6, or 17%, of all deaths per year are due to smoking-related illnesses (which include lung cancer). (~80,000 deaths)

From the total set of smokers, it then follows that under 1% died from smoking-related illnesses that year. That is a very small proportion, and thus the causality of smoking –> lung-cancer isn’t well founded.
However, this isn’t a clever judgement to make. The second statistic is from an epidemiological study, which compares groups of people who are alike except for one factor and tries to find whether that factor is associated with the health effect. The main problem with the statement I made is that I used a study of association and passed it off as a study that measures causation.
The second statistic really measures the probability that someone smokes or smoked, given that they died of a smoking-related illness, whilst a study that investigates causality measures the probability that someone will get lung cancer given that they smoke. This is the kind of misreporting that commonly occurs in many areas. A more useful statistic, and one less susceptible to misuse, would be that lifelong smokers die about 10 years earlier than nonsmokers (The New England Journal of Medicine).
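The gap between the two conditional probabilities is easy to see with made-up round numbers. Everything below is hypothetical, chosen only to mirror the shape of the statistics above; in particular the split of deaths between smokers and non-smokers is invented.

```python
# All numbers are hypothetical, for illustration only.
smokers = 12_000_000             # statistic 1: people who smoke
smoking_related_deaths = 80_000  # statistic 2: deaths in one year
deaths_among_smokers = 65_000    # assumed: most, but not all, of those deaths

# What the association-style statistic reports:
# P(smoked | died of a smoking-related illness)
p_smoker_given_death = deaths_among_smokers / smoking_related_deaths

# What the causal question actually asks about:
# P(smoking-related death this year | smoker)
p_death_given_smoker = deaths_among_smokers / smokers

print(f"P(smoker | death) = {p_smoker_given_death:.1%}")
print(f"P(death | smoker) = {p_death_given_smoker:.2%}")
```

The first number is large and the second is tiny, yet both come from the same counts; quoting one while implying the other is exactly the sleight of hand in the smoking “argument” above (and a per-year figure still says nothing about lifetime risk).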

To avoid being fooled by such things, we need to keep a watchful eye on the nature of the numbers themselves and how they were gathered. Here are a few questions you must ask yourself when confronted with a statistic in the wild, questions that go beyond blind skepticism.

Who found the numbers and why?
Numbers don’t lie, but people do. This question makes sure you address the bias and reliability of a statistic. Asking it is probably the most important step, as it influences all the other aspects of data collection discussed below.

Every point of view will use statistics to support their conclusions, and it is your responsibility to be aware of any prejudices made when the study was conducted.

For example, people supporting US military action overseas might ask, “Do you support the attempt by the USA to bring freedom and democracy to other places in the world?” That is a loaded question: it takes assumptions as axiomatic truths. It assumes that military action leads to freedom and democracy, so the respondent is torn between their ideals and their morals, and the answers are skewed in the direction the surveyor wants.
Of course, some bias and subjectivity are inevitable, but you need to be sure the study isn’t highly prejudiced.

What did the study exactly measure?
This is sort of what I hinted at in my example with smoking. It is your job to be aware of exactly what a statistic measures; just because two things seem related does not necessarily mean they are. It’s apples and oranges, and it isn’t: you can compare apples and oranges, and you can’t. It all depends on what you are measuring. Could you compare taste? No. Could you compare sugar content? Yes. Redness? No. Roundness? Yes.

How was the study conducted?
A good study needs to be well controlled, with a strictly defined subject that is well understood by the person conducting it, and with good, objective data collection.
Some surveys ask leading and loaded questions, like the example I gave about US military action. If you are asking a yes or no question, it should be just that.
Here are a few examples of bad studies (the first one is made up):

“Children with bigger feet spell better! In a survey of 200 children, those with larger feet consistently scored more highly on a spelling test.” …well, duh! Children with bigger feet also tend to be older, and so have more experience with words and tests. Not only was the study not well controlled (it should have compared children of the same age, stratified across all ages), it also took a pitiful sample.
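The foot-size example is a classic confounder, and it is easy to simulate. The model below is invented: age drives both foot size and spelling score, and foot size contributes nothing on its own.

```python
import random

random.seed(0)

# Invented model: age drives BOTH foot size and spelling ability;
# foot size itself has no direct effect on spelling.
ages = [random.randint(6, 12) for _ in range(200)]
foot_size = [10 + 0.8 * a + random.gauss(0, 1) for a in ages]
spelling = [5 * a + random.gauss(0, 5) for a in ages]

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Pooled over all ages, the spurious correlation looks impressive...
r_all = pearson(foot_size, spelling)

# ...but within a single age group (the controlled comparison) it collapses.
nine = [i for i, a in enumerate(ages) if a == 9]
r_within = pearson([foot_size[i] for i in nine], [spelling[i] for i in nine])

print(f"all ages:   r = {r_all:.2f}")
print(f"age 9 only: r = {r_within:.2f}")
```

Stratifying by age, as suggested above, is exactly what turns the “bigger feet spell better” headline into noise.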

More importantly, however, bad studies crop up even in science, particularly in biology and neuroscience. I found an article discussing a particular example (it is written by Ben Goldacre, author of Bad Science):

What if academics were as dumb as quacks with statistics?

I won’t go into the exact details of the statistical fallacy made in the study; he does a good job already (so I do recommend you read it). But what it amounts to is measuring the effect of a chemical on the neuron firing rate in mutant mice and normal mice. Suppose you measure a statistically significant firing-rate decrease of 30% in mutant mice, and a 15% decrease in normal mice that is not statistically significant. You cannot then say that the chemical affects mutant mice more than normal mice; to claim that, you must test the difference between the two effects directly. But that is exactly what some papers claim.
Sander Nieuwenhuis (the person who identified this) looked at 513 papers published in five prestigious neuroscience journals over two years. In half of the 157 studies where this error could have been made, it was made.
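The fallacy is easy to reproduce with toy numbers. In this sketch (the effect sizes and standard errors are invented) each group's effect is first judged against zero, and then the two effects are, correctly, compared against each other:

```python
import math

# Invented summary statistics: mean firing-rate change and standard error.
mutant_effect, mutant_se = -0.30, 0.12  # "significant" on its own
normal_effect, normal_se = -0.15, 0.12  # "not significant" on its own

# Each effect tested against zero (1.96 ~ the 5% two-sided threshold):
z_mutant = mutant_effect / mutant_se    # |z| > 1.96
z_normal = normal_effect / normal_se    # |z| < 1.96

# The claim "the chemical affects mutants more" requires testing the
# DIFFERENCE directly; SEs of independent estimates add in quadrature.
diff_se = math.sqrt(mutant_se ** 2 + normal_se ** 2)
z_diff = (mutant_effect - normal_effect) / diff_se

print(f"mutant z = {z_mutant:.2f}, normal z = {z_normal:.2f}, "
      f"difference z = {z_diff:.2f}")
```

One effect clearing the significance threshold while the other misses it says nothing about whether the two effects differ from each other; here the difference itself is nowhere near significant.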

IQ tests are another area that really should not be wholeheartedly accepted, because the subject is not well understood. What is intelligence? Is it the ability to do long division super fast? The ability to write eloquent poetry and flowery prose? The only thing you can really conclude from an IQ test is that the taker is good at the type of questions and problems given on an IQ test.

Who was asked?
This really applies to the social sciences. “Any study that uses human subjects is almost impossible to conduct under laboratory conditions, in which all factors that could affect the outcome of the experiment are controlled, including the variable under study. For a truly statistically valid study showing the effects of television violence on children, the children would have to be isolated from all other factors that could have an influence. These other factors would include contact with other human beings and with other expressions of violence (people, reading, radio, movies, newspapers, video games, etc.). This would obviously work to the social and developmental detriment of the children.”
Of course, even the purest science cannot be entirely objective as long as humans are the ones carrying out the research. But you should be aware of the limitations of social studies. They can be useful and provide insight, but do still regard them with slight suspicion.

What sort of comparisons are being drawn?
I wish I had an example for this, but I could not find one and it is getting late. When you compare the value of something from one place to another, you need to be careful that the conditions are the same before you can draw any conclusions. That seems obvious, but there can be subtle differences which influence the study and are hard to spot.

So there you have it: how statistics can be misused, and how you can avoid being fooled by them. This is by no means a post that disdains the use of statistics; after all, they remain an easy and concise way of delivering evidence. But be aware that they can conceal a lot, and it is good to read up on how to examine them. Yes, I do understand that no one has the time to sit down and examine every single statement fired at us, and there is a limit to how much you can examine before you end up conducting your own study and collecting your own data. By and large we trust the peer review process. But if a statistic sounds fishy, it probably is.



Featured image – Pinterest, Sue Gerry
House Image –

What is the physical structure of memories? What are they made of?

“What are memories but dreams of a better past” – Robert Brault

This is a poignant quote that captures the essence of memories: they can remind us of better times, and offer hope by referencing the past when the present seems dark. They serve as silent advisors as we progress through life, offering retrospect and highlighting experience, and the wisdom that comes with it, allowing us to make better decisions in the future by making us relive our failures. Memory’s effects are more profound than that: our ability to navigate, locate, listen, retort and process incoming information, and above all to reason and rationalise, is heavily dependent on memory.

Unfortunately that function can fail, and one can’t recall one’s past, one’s “current consciousness lost to eternity”. Neuroscientists have made several advancements in understanding how memories work and how these functions can fail, in an effort to understand conditions like amnesia.
I’ll attempt to read through the various literature and understand what constitutes memories and what their physical structure, if they have one, is. This question is something I would really like an answer to. Throughout primary and secondary school, I did not know what fire was; I was told it was just energy, which I thought was pretty cool, and I carried on thinking it was an ethereal form of energy with no physical substance underlying it.

Eventually I did come to learn that fire is really a collection of gas molecules, smoke particles and excited ions that radiate light by virtue of their temperature.
Memories, like fire, and even consciousness for that matter, seem formless at first. I don’t know about you, but memory isn’t really aptly named; I seem to have forgotten more things than I have remembered. Forgotten… then remembered… forgotten… recalled, but only in fragments. This makes memory seem more like a whimsical creature of fantasy than something concrete, which does not sit well with me as a physicist. After all, it challenges my acceptance of physicalism.

Let’s introduce some basic classifications.
The first type is called Sensory memory.
Sensory memory is the shortest-term element of memory. It refers to the ability to retain information gathered from the senses, on the order of milliseconds, even after the stimuli have been removed.
Sensory memory is involved in how we perceive things; in fact, its short timespan means it is often considered part of the perception process and not a memory at all! But it is an important component in how memories are made and stored in short-term memory. Experiments conducted by George Sperling in the early 1960s showed that its maximum capacity was around 12 items, although the subjects reported having seen more than they could recall.
Short-Term Memory (which I shall refer to as STM for convenience) is something most people are familiar with. It is like a tiny notebook everyone carries with them, for jotting down useful things we see during the day for reference.
If you will forgive me, I will compare it to RAM. RAM can store and retrieve data very quickly, so it is used to help the computer process whatever it is currently working on. In a similar way, short-term memory can both remember and process: it holds a small amount of information in a “readily available and active state” to allow quick retrieval. In 1974 Alan Baddeley and Graham Hitch proposed a system to model STM, which they called working memory. Here is a schematic:


Updated structure of memory to include the episodic buffer, as of 2000.

The central executive acts sort of like a processing chip. It directs information (gathered from sensory memory) to the three components: the phonological loop, the episodic buffer and the visuospatial sketchpad, accepting or rejecting information based on what is more useful.
For example, the ability to screen out distracting noises during surgery would be thanks to the central executive. According to this model, it acts more as an attentional processor than an actual memory store; whilst the phonological loop and visuospatial sketchpad are specialised, the central executive is more generalised.

The phonological loop concerns itself with auditory information and consists of two components:

  1. The phonological store, with links to speech perception, holds speech- and sound-based information for durations of around 1-2 seconds.
  2. The articulatory control process (ACP), with links to speech production, silently rehearses what has just been heard and feeds it back to the phonological store.

As the term loop suggests, the phonological store again feeds the information to the ACP, and it is rehearsed again.

The sketchpad, the VSS, stores visual information and arranges it so we can judge relative spatial distances, visualise, count and analyse aesthetic detail. It is an important tool for navigation. It can also manipulate visual information held in LTM. Try answering this: how many cupboards are there in your kitchen? The image of your kitchen was in the LTM, but to count the cupboards just now, it was brought up in your sketchpad and essentially ‘recreated’. Which is sort of cool but worrying: the idea that your memories are changed and updated constantly. To discuss that at length goes beyond the scope of this post, but here is an article where you can read up more on it:

The episodic buffer links this information across different domains, forming units of visual, spatial and verbal information that are integrated across different areas of the brain. It is said to have links with long-term memory, which allow it to extract and retain semantic meaning. For example, when watching a movie or reading a book, it is the episodic buffer that allows us to link the current incoming information with that stored in the LTM.
Just to clarify, this is not a comprehensive model. For example, I could not find anywhere an explanation of exactly how the central executive works, or what its capacity is.

Long-term memory
funnily enough, refers to remembering things for extended periods of time. Even though we seem to forget things a lot, LTM capacity is argued to be limitless.
Forgetting, then, is really a problem of recall, which is partly to do with the episodic buffer and the working memory. I could carry on explaining the different subtypes of LTM, but these are just different types of memories as they apply to different situations; they do not explain the process of making memories, as the working-memory model does for STM.
The consolidation process takes memories held in working memory, discards some extraneous details, and stores the result in the LTM. In doing so, the brain creates a relationship between the memory and the sounds, smells and sights you experienced.
For example, when talking to a colleague, you will be aware of and remember the colour of their tie or what shoes they have on. But if you were to recollect that conversation later, some of those details may have been lost; you will remember the topic of the conversation in greater detail than what sort of clothes they had on.

So far, all I have outlined explains how memories are cultivated, but it does not shed any light on what they are cultivated out of. What is the tether that binds memories to the physical world? The work of Karl Lashley and Wilder Penfield shows that memories stored in the LTM are widely distributed throughout the brain. The process of memory creation, even from a biological perspective, is heavily tied to reinforcement, which usually comes from repetition or from emotions related to the event.
From a purely physical perspective, long-term memories are stored throughout the brain as groups of neurons that are set up to fire in the exact pattern they did when the memory was created. These are stored in the area most strongly tied to the memory (neurons in the visual cortex store sight, etc.), and can be stored multiple times, redundantly. This too is reinforcement, and serves as a contingency against potential defects.
When we recollect events, these different patterns fire, and memories are brought into the working memory to be manipulated and remembered. There are two types of recollection, called implicit memory and explicit memory.
Implicit Memory is where we recollect events without any conscious effort, to help us perform tasks such as riding a bike.
Explicit Memory is when we intentionally recollect events and experiences. We do this on a daily basis, such as remembering appointment times.

Okay, so on a physical level, neurons repeat firing patterns that a certain event triggered. This is how a memory is stored, but what causes them to fire in that defined pattern thereafter? Also, does our brain gain mass when we make memories?

Eric Kandel, an American neuropsychiatrist, wrote about the physical manifestation of memories in a paper titled ‘The Molecular Biology of Memory Storage: A Dialogue Between Genes and Synapses’ in 2001.

According to the paper, “with both implicit and explicit memory there are stages in memory that are encoded as changes in synaptic strength and that correlate with the behavioral phases of short- and long-term memory. The short-term synaptic changes involve covalent modification of preexisting proteins, leading to modification of pre-existing synaptic connections, whereas the long-term synaptic changes involve activation of gene expression, new protein synthesis, and the formation of new connections.”
The creation of new proteins, mediating the synaptic changes that allow the neuron firing patterns which constitute memories, is an interesting idea and result. Perhaps, then, our brain does gain mass, although it must be very small, when we gain information. Although I am inclined to think that may be a wrong interpretation, because if our brain gained mass simply by gaining information, it would appear to break conservation of mass.
Where did the mass come from? I don’t know; I suspect I shall have to do further reading to gain a better perspective.
But to conclude: memory creation manifests itself on a biological level as the creation of new proteins or the modification of existing ones. Just as computers store information as binary, which is really just electric currents manipulated through logic gates, our brains store it by modifying synapse structures and creating proteins.

References and further reading :

How We Remember, and Why We Forget (featured image)

Evolution: A rundown of the most common misconceptions.

Everything needs a little debunking, it seems, and there certainly hasn’t been a lack of debunkers (if that is a word) of the theory of evolution. But sadly, what they recognise as flaws of the theory are just misconceptions. So here I attempt to dispel some commonly held beliefs about the theory of evolution. I held these misconceptions myself in the past, and a little extra reading corrected them for me. I still notice people with the wrong idea of how evolution works, which becomes apparent when they discuss it. Perhaps these misconceptions provide a set of reasons that cause many to reject evolution: they have the wrong idea of how it works. In fact, what troubles me most is that I’ve seen people who accept evolution but still hold some of these misconceptions, and that is a problem, because it means they accept it without being aware of what evolution actually is.

So here goes, in no particular order of importance :

  • Evolution is ‘just’ a theory.
    A beloved sticking point: sceptics and almost all debunkers get hung up for quite a while on the word ‘theory’. As my friend Inigo Montoya points out, they keep using that word, but it does not mean what they think it means.
    There is a large difference between the casual use of the word ‘theory’ and the scientific use. This is more broadly a misconception about the scientific process.
    Most people imagine a hierarchy that goes a bit like this:
    Hypothesis –> Theory –> Law
    This implies that a theory is just a guess at how things work that has not been proven, and if it is, it graduates into a law. This really is an incorrect view. Let’s introduce a few semantics and settle the actual definitions of theories and laws.
    Laws are predictive and declarative statements about behaviour, based on observation, which can be falsified by further observations. Take Newton’s second law as an example: the force on an object causes the object to accelerate. What he actually wrote was “The alteration of motion is ever proportional to the motive force impress’d; and is made in the direction of the right line in which that force is impress’d.” That means when you apply a force to something, it changes its motion in the direction of the force. Which is pretty obvious to anyone who has lived; when you push something it does indeed move. Thus it is a law, because it always happens.
    A theory provides a mechanism, or a set of mechanisms (that can be tested), based on the best evidence gathered hitherto, to explain natural phenomena. If you will, it explains why the laws are what they are. Einstein’s theory explains the fact that light has the same speed in every reference frame. It rests on multiple laws and evidence such as the Michelson-Morley experiment. In the same way, Darwin’s theory explains the fact that organisms aren’t the same as their parents, that new traits emerge, and that these can proliferate through generations if they prove advantageous. A theory is not just a hunch; that is the hypothesis (which an experiment tests). A theory combines several laws with a mechanism that explains them.
    Laws and theories go hand in hand in science. To summarise, a law says that something happens; a theory explains how it happens.


  • Evolution makes species harder, better, faster, stronger. (I’m on a massive Daft Punk kick right now)
    This arises when you look at the evolution of humans. We seem to have always improved, and now we are at the top, with the best brains. But evolution is not a linear chain of events; it is a tree, and we are just one of its branches. The process of natural selection does not progress towards a perfect being of a particular type of organism, for two reasons.
    Numero uno : natural selection is not an all-powerful sentient being, it is just the name we have given to the tendency of processes that simply are. From a vast selection of genes, those that help an organism to survive in particular environments or conditions are passed on.
    But to get those genes passed on, you do not have to be the ‘best’, you just need to be good enough. Sort of like separating a mixture of sand, soil and water with a sieve: the particles just need to be small enough to pass through; any smaller and it is just wasted effort. Survival of the fittest should really be called survival of the fit enough. It doesn’t have the same ring to it though, does it?
    The second reason is that perfect is a relative term. (Don’t worry, I’m not about to cop out by saying it is all relative.) Thus it is hard to define progress when it comes to evolution; if you aren’t sure what progress is, how can you be sure what the end goal is? Evolution allows organisms to just be: if their genes help them survive, they survive, and vice versa. Conditions and environments can change, and species can evolve ‘backwards’ just as easily as they go ‘forwards’.
    Take this as an example: before the invention of modern medicine, genes that allow juvenile diabetes to occur would have been selected against, and their carriers would not have survived as long. But now we can treat diabetes with insulin, so the gene that would have been ‘weak’ before is now just irrelevant in terms of the survival of our species.
    Moreover, evolution is not a linear progression from one species to the next. We didn’t strictly evolve ‘from’ Neanderthals in the sense that all ‘humans’ (I use the term loosely) were once Neanderthals and then evolved into Homo sapiens. The two co-existed for a long time. It is just that Neanderthals could not cope with the changing conditions.
  • Evolution is progressive; weaker genes are always removed.
    I guess this follows on from the previous point, and again *sigh* weak and strong are relative terms. But let me make my case through a different reason. Evolutionary fitness and everyday fitness are different things. Evolutionary fitness is mostly concerned with the ability of an organism to pass on its genes to the next generation by the process of, ya know, *cough* getting laid.
    A weak bird that has a magnificent and bright plume of feathers might attract the lady bird (oh god, I just realised how that reads) more than a strong bird whose feathers are slightly duller. If an organism gets its genes into the next generation, it is termed fit.
    To the disappointment of several people, stupidity is really not going to be removed from the human gene pool. I’m looking at you, Darwin Awards.
  • Evolution disproves God.
    This misconception arises from the assumption that evolution explains the origin of life, which really isn’t the focus of the theory. Evolution and religion are really not at war, and the theory is largely irrelevant in the debate over whether God exists.
    In fact, religious scientists feature prominently in the history of evolutionary thought: the Catholic-trained naturalist Jean-Baptiste Lamarck proposed an early theory of evolution decades before Darwin’s, and the Augustinian monk Gregor Mendel laid the foundations of genetics soon after ‘On the Origin of Species’ was published. Now I know what you are wondering: that is a really misleading title for a theory which claims it does not focus on the origin story. Well, the full title is ‘On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life‘.
    It really deals with the process behind the creation of species as we know them today from what they must originally have been, and species as a concept is a human invention, so it really does not shed immense light on how life itself was created. In fact, in 1950 Pope Pius XII declared there is no intrinsic conflict between faith and the theory. It leaves plenty of room for God.
  • Evolution proves God.
    In the same way that evolution is irrelevant in disproving God, it is also irrelevant in proving Him. This belief arises from a combination of misconceptions, including “Evolution makes organisms that are perfectly adapted to their environment” and “Natural selection gives organisms exactly what they need to survive and acts for the good of the species”. It is hard to deny the existence of a sentient being with those ideas. I mean, it all seems soooo perfect.
    Let’s undertake some redefinition.
    The first, I hope I have already corrected; perfect is a relative term and organisms just have to be good enough, yadda yadda yadda.
    Natural selection does not ‘give’ anything; it only seems that way in retrospect, when we look at evolution as a whole. If some organisms in a population have a gene variation that allows them to survive, their genes will be passed on. If none of them have it, they will die. The only reason we think natural selection cradles life and gives a helping hand is that, by and large, we hear the most about species that survived, so it appears they benefitted from natural selection. We hear very little about those that died out, and thus we don’t realise natural selection does not always ‘give’ species what they need. Our tendency to think that says more about the human habit of searching for patterns and order in seemingly random events than it does about the apparently altruistic nature of evolution.
  • Evolution is all random.
    Evolution is certainly not all random. There are elements of randomness in the genetic mutations that organisms have, and that really is to do with the statistical nature of the world. But that is only half the story; the word ‘selection’ alone relieves the process of total randomness. Natural selection is very much influenced by climate, environment, geological activity, temperature fluctuations and so on. Something so heavily dependent on all these factors is not entirely random. But do not make the mistake of thinking it is a carefully planned and constructed process either; after all, it can only select from a gene pool that itself is mostly random.
  • Humans can’t influence evolution.
    This is a very pernicious idea: that because evolution is so slow, humans cannot influence it, and, more dangerously, that climate change will not destroy species or make them extinct because they will just evolve and adapt. Humans can in fact, and do, influence evolution. A lot. We cause so many changes in the environment that we are frequent instigators in the evolutionary processes of many organisms.
    Take elephants, for example: we value their long tusks, so we kill those that have them. Those that have short tusks survive; in short, elephants will eventually evolve to have shorter tusks.
    Bacteria, malaria parasites, HIV and cancers have all evolved to be resistant to our drugs. MRSA would not have happened without humans. More importantly, species and organisms will not just ‘adapt’ as climate change destroys ecosystems. As I said before, natural selection does not give organisms what they ‘need’. If a population does not contain genes to overcome those challenges, those organisms will die out.
    If the population of polar bears does not contain genes that allow them to hunt with less and less sea ice, they will die out.
  • The theory is flawed and scientists keep adjusting it to make up for those flaws
    Again, this is really just a misconception about the scientific process. A theory isn’t flawed, it just does not have the whole picture. Science is a continued endeavour; a topic does not end with a theory that explains it. Scientists gather more evidence, evaluate the assumptions of the previous version of the theory against that evidence, and then refine it if necessary.
    Just because evolution – like all theories – is being refined does not make its main tenets false. Science isn’t politics; it has the liberty to go back on its word and not be ashamed.
    All scientific theories are works in progress; I would even go so far as to say that science itself is a work in progress. There is always more evidence to gather, further refinements to make, and thus more to learn. What we know and teach about Newtonian mechanics is vastly different from the understanding people had when Newton first published his ideas. Several other people contributed to shape the topic into its modern form.
    The theory of evolution may have flaws in some niche areas concerning the specifics of certain organisms, but its core principles aren’t flawed.
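Two of the points above — ‘survival of the fit enough’ and ‘random mutation plus non-random selection’ — can be sketched as a toy Python simulation. To be clear, this is a made-up illustration with invented numbers, not a real population-genetics model: selection acts as a sieve over a threshold, and mutation is a small undirected nudge.

```python
import random

def select_fit_enough(fitnesses, threshold):
    # Selection as a sieve: everything that clears the threshold survives,
    # not just the single 'best' organism.
    return [f for f in fitnesses if f >= threshold]

def offspring(survivors, size, mutation, rng):
    # Children inherit a random parent's fitness plus a small, undirected
    # mutation -- the genuinely random half of the process.
    return [rng.choice(survivors) + rng.uniform(-mutation, mutation)
            for _ in range(size)]

rng = random.Random(0)                               # seeded for repeatability
population = [rng.uniform(0.0, 1.0) for _ in range(200)]

for _ in range(10):
    survivors = select_fit_enough(population, threshold=0.5)
    population = offspring(survivors, size=200, mutation=0.05, rng=rng)

# After ten rounds, everyone who passes the sieve has merely cleared the bar;
# the survivors still span a whole range of fitness values.
final_survivors = select_fit_enough(population, threshold=0.5)
```

Notice that the sieve never picks a winner: any organism above the (arbitrary) threshold passes its ‘genes’ on, so the population stays varied rather than converging on some perfect value, while the non-random threshold is what stops the process from being a pure random walk.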

So there you have it, these are the misconceptions that have become apparent to me over time, and I hope I have successfully explained why they are wrong, and also why it is important to rectify them: having a wrong idea of a theory and rejecting it for that reason is quite counterproductive to science. But perhaps I haven’t and you still disagree; if so, leave a comment on what you disagree with and why, and let’s have a discussion. But please keep it civil, it wouldn’t bode well if we turned the comment feed into an Xbox Live argument.

Here are a few links showing where I got some of the examples mentioned here (the second picture is the thumbnail from the video) :