http://smmitch.wordpress.com/2012/03/25/sona-love-it-hate-it-useful-or-useless/#comment-73

http://laurenpsychology.wordpress.com/2012/03/25/should-psychology-be-written-for-the-layman-or-should-science-be-exclusively-for-scientists/#comment-99

http://kpsychb.wordpress.com/2012/03/25/grandparenting-a-child-with-a-disability-an-emotional-rollercoaster/#comment-43

http://cerijayne.wordpress.com/2012/03/24/popps-good-or-bad/#comment-73


CONVICTED ON STATISTICS?

March 25, 2012

In 1999, solicitor and mother Sally Clark was convicted of murdering her two children. The expert witness for the prosecution presented the jury with outrageously miscalculated statistics, and because of this Clark was sentenced to TWO life sentences. Her story has since been published as a book by John Batt and is truly heart-rending.

Clark's first child, Christopher, died three years prior to the '99 trial, at the age of three months. His death was originally treated as arising from natural causes, attributed by some to Sudden Infant Death Syndrome, otherwise known as SIDS. When her second child, Harry, died a year later aged two months, the Home Office pathologist who carried out the post-mortem determined that both deaths were suspicious. Sally Clark was arrested and tried at Chester Crown Court.

During the trial, statistical evidence was given by one of the prosecution witnesses, Professor Sir Roy Meadow. Meadow was a highly respected expert in the field of child abuse and the author of the leading textbook 'The ABC of Child Abuse'. His evidence was outrageously flawed: through miscalculation, Meadow concluded that neither child had died of natural causes and that the likelihood of this was in fact 1 in 73 million, suggesting to the jury that there was a 1 in 73 million chance that she was innocent and falling foul of the 'prosecutor's fallacy' (http://en.wikipedia.org/wiki/Prosecutor's_fallacy). The medical evidence behind this statistic was extremely complicated, and I don't believe the jury can be blamed for accepting it; surely we should be able to have every faith in a man such as this.
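To show why treating that figure as the chance of innocence is a fallacy, here is a small sketch using Bayes' rule. Apart from Meadow's 1 in 73 million, the numbers are invented purely for illustration (the rate of double infant murder was never part of the trial evidence):

```python
# Illustrative only: made-up numbers to show the prosecutor's fallacy,
# i.e. confusing P(evidence | innocent) with P(innocent | evidence).

# Suppose the "evidence" is simply "two babies in one family died suddenly".
p_two_sids   = 1 / 73_000_000   # Meadow's (flawed) figure for two SIDS deaths
p_two_murder = 1 / 500_000_000  # hypothetical rate of double infant murder

# Bayes: among families where two babies have died, what fraction are innocent?
p_innocent_given_deaths = p_two_sids / (p_two_sids + p_two_murder)
print(p_innocent_given_deaths)  # ~0.87: innocence is still far more likely
```

Even taking the 1 in 73 million figure at face value, innocence can remain the far more likely explanation, because the alternative (a double infant murder) is rarer still.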

How could he possibly have come to this conclusion? Where are the mistakes? …

One crucial piece of Meadow's evidence was not of a medical nature at all, but was instead based on a report he was involved with. At the time of the trial, the Professor was writing a preface to the report of a government-funded research team, the 'Confidential Enquiry into Sudden Death in Infancy' or 'CESDI'. The report covered an in-depth study of over 400 sudden infant deaths in the UK over a three-year period, and its aim was to establish possible risk factors for sudden or unexpected deaths in infants. These factors included the mother's age, whether either parent smoked and whether the household included a wage-earner. The Clark household had none of the aforementioned risk factors, so Meadow concluded from the CESDI study that the chance of SIDS in this case was 1 in 8,543, and is quoted as saying: "you have to multiply 1 in 8,543 times by 1 in 8,543 and I think it gives that in the penultimate paragraph, its (sic) points out that it's approximately a chance of 1 in 73 million". Basically, he treated each death as independent of the other, using the same method we would use to work out the probability of rolling a 6 on a die immediately after already rolling a 6. He even used a Grand National analogy that you can find here (as it's rather lengthy, I can only assume this was in order to confuse the jury further): http://books.google.co.uk/books?id=Yz1NvkhJxq8C&pg=PA164&dq=#v=onepage&q&f=false (page 164)
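Reproducing his arithmetic makes the problem easy to see. The 1 in 8,543 figure comes from the CESDI report; the '1 in 100' chance of a second death given a first is a made-up number, used only to illustrate what dependence between the two deaths would do to the result:

```python
# The calculation Meadow presented, step by step:
p_one_sids = 1 / 8_543                        # CESDI figure for a low-risk family
p_two_sids_if_independent = p_one_sids ** 2
print(1 / p_two_sids_if_independent)          # ~72,982,849, i.e. "1 in 73 million"

# The flaw: squaring is only valid if the two deaths are independent.
# If shared genetic or environmental factors make a second SIDS death more
# likely once one has occurred, the true probability is far higher.
# Purely for illustration, suppose the risk of a second death is 1 in 100
# given a first death (an invented figure, not from the case):
p_two_sids_dependent = p_one_sids * (1 / 100)
print(1 / p_two_sids_dependent)               # 854,300: vastly more likely than 1 in 73 million
```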

Some weeks later the British Medical Journal published an article suggesting that SIDS is not a random event, and that a double SIDS case may occur as often as every 18 months, not every 100 years as Prof. Meadow had suggested. Surely, whether you are an expert or an outsider, would it not cross your mind that SIDS may be hereditary?

Unfortunately this is not the only case in which the misuse of statistics has led to an unjustified and saddening ending. And, unsurprisingly, when Radio 5 Live and The Observer put a number of questions to the Professor concerning Clark, he declined to talk to them for fear of "being stitched up". To be quite honest, this really sickens me. We should be able to have faith in experts such as Meadow to provide us with truthful statistics, not statistics twisted to suit their argument, especially in circumstances such as these, where a mother lost her two boys and was then convicted of their murder.

… and very sadly, it led to this: http://www.dailymail.co.uk/news/article-443123/Sally-Clark-drank-death.html

I found the story genuinely upsetting, so I hope I haven't caused any distress to anyone who's chosen to read this. I hope that instead it encourages you to think more carefully about how you present your own research and how honestly you disclose your results.

http://statstastic.wordpress.com/2012/03/07/is-it-possible-to-prove-a-research-hypothesis/#comment-63

http://nirapsy.wordpress.com/2012/03/11/we-are-the-same-just-different/#comment-43

http://thgoatse.wordpress.com/2012/02/05/how-do-you-know-whether-your-findings-are-valid/#comment-62

How believable is psychology research? The answer: it should be completely believable and trustworthy. But much like we 'Facebook stalk' new people we've met in the pub that night, we must do our own research before placing our trust in anyone or anything. I admit that sounded awfully pessimistic, and I should have more faith in my fellow man. Unfortunately, some stories suggest otherwise…

In November of last year, two instances of academic fraud brought into the public eye began to raise questions about the validity of research in this field. A particularly shocking story is that of Harvard professor and "heavyweight" cognitive psychologist Marc D. Hauser. The professor resigned after being found responsible for scientific misconduct by the University. Alarm bells were raised when Hauser's lab students alleged data fabrication, and The Harvard Crimson reported that investigations found examples of misconduct in three of Hauser's published papers and five further examples in unpublished or corrected papers. For such a prestigious university, these findings are astounding.

In light of scandals such as this, it's important that we bear a few things in mind when looking at new research findings:

– Too good to be true? It probably is! The more extraordinary the findings, the more sceptical we should be; shocking claims must be accompanied by extensive evidence.

– Be wary of studies that heavily support an 'opinion' in controversial areas of research, e.g. gender, ethnicity or culture.

– Be sceptical of any study where the raw data is not made available to other researchers. This would explain how a few top dogs have sneaked under the radar (for a certain amount of time).

– Be on the lookout for researcher bias. What are the researcher's interests? Are they likely to want to 'prove' something specific?

Of course, there are many more things we must take into account. Some researchers may wish, in this day and age, to gain some sort of celebrity status from producing outlandish findings, and often the media is more than willing to assist them in their quest.

The BBC has recently produced an article with some striking findings, e.g. that slow walking predicts dementia, but, unlike many news reports, the BBC has politely reminded us that "Further research is needed to understand why this is happening and whether preclinical disease could cause slow walking and decreased strength", suggesting that this is not the be-all and end-all of this topic of research.

In short, we should be able to believe research findings. However, take psychology (particularly pop-psych) with a very large pinch of salt!

(http://www.thecrimson.com/article/2010/9/14/hauser-lab-research-professor/)

http://www.bbc.co.uk/news/health-17028712 – slow walking predicts dementia.

http://baw8.wordpress.com/2012/02/19/designs/#comment-52

https://dybanneediu.wordpress.com/2012/02/16/observational-research-method/#comment-40

http://psychab.wordpress.com/2012/02/18/type-one-and-type-two-errors/#comment-39

http://fr4nw.wordpress.com/2012/02/05/30/#comment-30

http://psucd8.wordpress.com/2012/02/17/is-it-fair-that-negative-results-are-not-fully-represented-in-published-literature/#comment-54

Of course, this is entirely unbiased (as a psychologist, I know to make objective observations), but psychologists are pretty awesome! I think being able to say you can measure anything, even internal or covert behaviours, is rather impressive, and I defy anyone to argue against it. Now, before you get ahead of yourself, I didn't necessarily just suggest we could measure EVERYTHING. I said it's cool IF we can.

So, let's look at what we have here… In general terms, psychologists try to explain why we do all sorts of daft things and why we do the normal things we do; basically, any behaviour. Many behaviours can be directly observed and measured easily. However, there are also numerous behaviours that we cannot literally see, and this of course makes measuring them rather difficult. Much like Freud's theory on dreams (which had very little substantial evidence), psychologists use theories to explain why it is we engage in some behaviours that have no observable qualities. For example: how can we possibly measure self-esteem, intelligence or motivation? There are a number of ways to get around this…

Let's first define a few terms:

– Hypothetical construct: a behaviour or attribute that we cannot directly see and/or measure

– Operationalised variable: the result of a researcher removing the ambiguity of a construct so that it can be measured and presented quantitatively or qualitatively

Although constructs like self-esteem are entirely internal and we cannot see them, we can observe and measure behaviours that fall under the construct using operationalised variables. So, we measure other variables that represent the construct, a bit like its 'sub-categories'. Take the construct of intelligence: Lewis Terman revised the Binet scale in 1916 and introduced the concept of the intelligence quotient (IQ). Though I cannot look at an individual and measure how intelligent they are purely through observation, I can give them an IQ test. This type of measurement gives me an idea of how well an individual can complete and understand tasks, and is therefore an indirect method of obtaining a measurement of someone's intelligence.
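As a rough illustration of what an operationalised variable can look like in practice, here's a minimal sketch in Python. The items and scores are invented for the example (loosely modelled on the kind of statements you'd find on a self-esteem questionnaire), not taken from any real scale or dataset:

```python
# A minimal sketch of operationalising a construct: "self-esteem" cannot be
# observed directly, so we measure it indirectly as the total score on a set
# of questionnaire items (items and scores below are invented examples).

items = {
    "I feel I have a number of good qualities": 4,        # 1 = strongly disagree ... 5 = strongly agree
    "I can do things as well as most other people": 3,
    "I take a positive attitude toward myself": 5,
}

# The operationalised variable is simply the sum of the item scores;
# higher totals are taken to represent higher self-esteem.
self_esteem_score = sum(items.values())
print(self_esteem_score)  # 12 out of a possible 15
```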

We must be wary, though, about whether the information we find is valid when we measure behaviours that are only representative of a construct, because we haven't measured the construct itself directly. We also run the risk of finding a correlation rather than causation, meaning we cannot draw conclusions about the causal relationships between the variables being observed.

We can also use operationalised variables that give a quantitative representation of a construct. Stress is a hypothetical construct that is difficult to measure on its own; however, we can measure blood pressure. The general rule is: the more stressed we are, the higher our blood pressure. This provides us with a numerical value that we can use for analysis.
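To make that concrete, here's a small sketch (the readings are fabricated for illustration, and I'm assuming Python 3.10+ for statistics.correlation) showing how you might check that the operationalised variable actually tracks self-reported stress:

```python
# A sketch of a quantitative operationalisation: using blood pressure as a
# stand-in for stress. All values below are made up for demonstration.
from statistics import correlation  # Pearson's r, available from Python 3.10

self_reported_stress = [2, 4, 5, 7, 8, 9]              # e.g. a 1-10 stress rating
systolic_bp          = [112, 118, 121, 130, 135, 141]  # mmHg

# A strong positive correlation would support (but not prove) the idea that
# the operationalised variable reflects the underlying construct.
print(round(correlation(self_reported_stress, systolic_bp), 2))  # ~0.99
```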

With this information in mind, in theory this would suggest we can in fact measure everything… but I shan't go as far as to say that's completely true (just in case anyone starts throwing blog-comment abuse, haha). As long as you can find operationalised variables for a construct, you can measure it.

((This topic is very closely related to the 'Psychology as a Science' debate, and I found a really nice breakdown of it here if you're interested: http://www.simplypsychology.org/science-psychology.html))


http://re3ecca.wordpress.com/2012/02/05/the-ethical-implications-of-mind-reading/#comment-55

http://psuc97.wordpress.com/2012/02/05/how-do-you-know-if-your-findings-are-valid/#comment-23

http://ohhaiblog.wordpress.com/2012/02/05/another-day-another-blog/#comment-65

http://danshephard.wordpress.com/2012/02/05/experimental-design-repeated-measures-vs-independent-measures-or-is-there-another-option/#comment-18