Tuesday, August 30, 2005

Measurement issues in sex differences in intelligence

There's been a lot of fuss recently over a paper entitled 'Sex differences in means and variability on the progressive matrices in university students: A meta-analysis', published in the British Journal of Psychology by Paul Irwing and Richard Lynn. The Guardian and The Daily Mail ('The Great Intellectual Divide'), amongst many others, have reported on this. The journalists report that (basically) the study found men to be more intelligent than women.

However, we need to think about this a little more critically, as there are a number of hidden assumptions that are relevant to the interpretation of this study, and they are not made explicit.
The first assumption is that Raven's Progressive Matrices (RPM) measures IQ. I think that it does - the RPM has one of the highest g-loadings of any test. If you want a test of IQ, you cannot go far wrong with the RPM.

Of course it cannot be a perfect measure of IQ - there will always be measurement error - but what we need to know is whether that measurement error is random, or whether it is correlated with sex. That is, does the Raven's Progressive Matrices test overestimate the IQ of men, or underestimate the IQ of women?

It is quite possible for the correlation between IQ and a test to be high, but for the errors to be non-random. For example, it is well known that men are more competitive than women. The RPM is an untimed test - that is, you can spend as long as you like on it. The papers that Irwing and Lynn used did not record the amount of time people took over the test, and so could not take this into account. If people treat the test as some form of competition (and I usually do), the more competitive people might try harder, and take longer. The graph below shows this possible effect.

[Graph: hypothetical effect of time spent on test score, plotted separately for men and women]
If we used a different test, say one which was time-constrained, we might find those lines moving closer together.
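To make the argument concrete, here is a minimal simulation sketch of that confound. Every number in it (the size of the per-minute score bonus, the ten-minute difference in time spent, the cut-off) is invented purely for illustration - none of it is an estimate from any study:

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Both groups have identical true ability
ability = rng.normal(100, 15, size=2 * n)
group = np.repeat(["men", "women"], n)

# Invented competitiveness effect: men spend ~10 minutes longer on average
time_spent = np.where(group == "men", 40.0, 30.0) + rng.normal(0, 5, 2 * n)

# Untimed test: each extra minute buys a little extra score
untimed = ability + 0.3 * time_spent + rng.normal(0, 5, 2 * n)

# Timed test: everyone is cut off at 30 minutes, capping the time bonus
timed = ability + 0.3 * np.minimum(time_spent, 30) + rng.normal(0, 5, 2 * n)

for name, score in [("untimed", untimed), ("timed", timed)]:
    gap = score[group == "men"].mean() - score[group == "women"].mean()
    print(f"{name}: men - women gap = {gap:.1f} IQ points")

In this toy world the untimed test shows a gap of about three points and the timed test a gap of well under one, even though true ability is identical in the two groups - which is all the argument above requires.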

The second assumption is that IQ (the thing that is measured by IQ tests) is the same thing as intelligence (the thing that we each have more or less of) - again, there is a relationship there, and I suspect it's a strong one, but I don't think it's perfect.

The problem is that the concepts of IQ and intelligence are difficult to define without using operational constructs, like the actual tests used - and once we start talking about the actual tests, we don't know what the relationship is between the test and the construct we are interested in. It would not be difficult to construct an "IQ" test that men did better on (Hey! We've got one - the Raven's Progressive Matrices!), but would it be possible to construct a test that women did better on? I suspect it would. Would it be possible to determine which of those tests is the better (as in less biased) measure of IQ or intelligence? I suspect it wouldn't.
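As a sketch of that last point, here are two toy tests, each built from the same underlying ability plus a different narrow skill (the 0.3 group differences on the narrow skills are made up for the sake of the example). Both tests correlate strongly with the underlying ability, yet their sex gaps point in opposite directions, and nothing in the correlations tells you which test is the less biased one:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

g = rng.normal(0, 1, 2 * n)          # same true ability in both groups
is_man = np.repeat([True, False], n)

# Invented narrow-skill differences, in opposite directions
spatial_test = g + np.where(is_man, 0.3, -0.3) + rng.normal(0, 0.5, 2 * n)
verbal_test = g + np.where(is_man, -0.3, 0.3) + rng.normal(0, 0.5, 2 * n)

for name, test in [("spatial-flavoured", spatial_test),
                   ("verbal-flavoured", verbal_test)]:
    r = np.corrcoef(test, g)[0, 1]
    gap = test[is_man].mean() - test[~is_man].mean()
    print(f"{name} test: correlation with ability = {r:.2f}, "
          f"men - women gap = {gap:+.2f}")

Both tests correlate with the underlying ability at around 0.86 here, but one favours men and the other favours women by the same margin.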

Friday, August 26, 2005

Power Analysis updates

I've tweaked the power analysis page, updating the reading and adding a little more on the different meanings of post hoc power.

Thursday, August 25, 2005

Meta-Publication Bias?

A problem with evaluating anything in research is publication bias: research which has a statistically significant result may be more likely to be published, and more likely to be published in more widely read journals. Sometimes this is called the file drawer problem - if you want to know about all the research that has been carried out in an area, it's not enough to find and read all the journal articles; you would also need to read all the articles that didn't get published and are sitting in file drawers.
Research has been carried out on publication bias itself, to see whether this effect occurs. This week, however, the British Medical Journal has an article on publication bias in studies of publication bias.
The researchers looked to see if studies of publication bias that found publication bias were more likely to be published - a sort of meta-publication bias. They did not find any such bias, but they note that the power to detect such a bias was low, because of the small number of studies.
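A small simulation gives a feel for how strong the basic file-drawer effect can be. The numbers here (a true effect of 0.2 standard deviations, 30 participants per group) are invented for illustration; the only mechanism at work is that studies reaching p < 0.05 get published and the rest stay in the drawer:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2          # true difference between groups, in SD units
n_per_group, n_studies = 30, 1000

published = []
for _ in range(n_studies):
    a = rng.normal(true_effect, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:           # only "significant" studies make it into print
        published.append(a.mean() - b.mean())

print(f"true effect: {true_effect}")
print(f"mean published effect: {np.mean(published):.2f} "
      f"({len(published)} of {n_studies} studies published)")

In runs like this only around one study in eight reaches significance, and the published studies average an effect roughly three times the true one - the file drawer holds everything that would have pulled the estimate back down.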

Tuesday, August 23, 2005

Median is the Message

Today's Guardian has an article about the rise in the average cost of a wedding. What the authors of this kind of article never seem to remember (or know in the first place) is that the distribution of this sort of thing is highly positively skewed.
If 999 people spend £1,002 on their wedding, and Posh and Becks (say) spend £1,000,000, then the average wedding costs about £2,000. Of course, no one has actually spent £2,000 on their wedding, and the vast majority have spent far less.
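Here is the same arithmetic in a few lines, using the figures above, for anyone who wants to check the mean against the median:

import numpy as np

costs = np.array([1002] * 999 + [1_000_000])
print(f"mean:   £{costs.mean():,.0f}")      # about £2,001
print(f"median: £{np.median(costs):,.0f}")  # £1,002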

Similarly, they say the "average" guest spends £300, but I've never spent anything approaching that on a wedding - the median should be the message, because the median represents the amount of money that half of people spend above, and half spend below.