• AnarchoEngineer@lemmy.dbzer0.com · 2 days ago

    The 8 sigma referred to the confidence interval around the point estimate, not to a score that would itself be an 8-sigma result.

    Furthermore, as your picture points out, that test was defective because its actual standard deviation was higher than the one it claimed. Modern tests normalize the score distribution to avoid that problem.
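
    A minimal sketch of what that normalization looks like in practice (the raw scores below are invented, just for illustration): raw test results are z-scored against the norming sample and rescaled to the conventional mean of 100 and standard deviation of 15.

    ```python
    import numpy as np

    # Hypothetical raw scores from a norming sample (made-up numbers)
    raw_scores = np.array([41, 55, 47, 62, 50, 58, 44, 53, 49, 60])

    # Deviation IQ: z-score each raw result against the sample,
    # then rescale to the conventional mean of 100 and SD of 15
    z = (raw_scores - raw_scores.mean()) / raw_scores.std(ddof=1)
    iq = 100 + 15 * z

    print(np.round(iq))  # every score now lives on the familiar 100/15 scale
    ```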

    There is no theoretical limit to IQ. If you gave a long enough test, it would be possible (though incredibly unlikely) to get IQ values in the thousands.

    I think you're confused about what the score is supposed to mean. It has nothing to do with the number of people in existence.

    The test assumes that human IQ follows a normal distribution. Test makers then sample humans and normalize the scores of each sample population. This is not a “you are smarter than x people” test. It assumes that IQ is an inherent property of humanity and gives you the probability of a randomly chosen person (however many people exist) scoring lower than you.
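
    To make that concrete, here's a quick sketch (assuming the usual mean-100, SD-15 convention) of how a score maps to “probability of a random person scoring lower than you” under the normal-distribution assumption; `fraction_below` is just a name I made up.

    ```python
    from scipy.stats import norm

    def fraction_below(iq, mean=100, sd=15):
        """Probability that a randomly drawn person scores below `iq`,
        under the assumed normal distribution (not a count of real people)."""
        return norm.cdf(iq, loc=mean, scale=sd)

    print(fraction_below(130))  # ~0.977, about two sigma above the mean
    print(fraction_below(160))  # ~0.99997, four sigma above the mean
    ```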

    Sure, at a certain point that basically means you're likely to be smarter than everyone alive currently. And yeah, if we get multiple scores like that, it's likely (though not guaranteed) that our metrics are not effective.

    As a counterexample showing why multiple crazy-high scores don't necessarily mean the scales are broken, here's a thought experiment:

    Imagine you ran this test over a million years, or that you sampled an absurd number of people, say 4 quadrillion. Some people will rank highest in that set, so the chance of any given person being in the top 4 is about 1 in a quadrillion.
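
    For a rough sense of scale (again assuming the usual mean-100, SD-15 convention), here's what a 1-in-a-quadrillion rank would translate to as a score:

    ```python
    from scipy.stats import norm

    top_k, sample_size = 4, 4e15        # top 4 out of 4 quadrillion people
    p_upper_tail = top_k / sample_size  # 1e-15, i.e. 1 in a quadrillion

    # IQ whose upper-tail probability is 1e-15, assuming mean 100 and SD 15
    iq_needed = norm.isf(p_upper_tail, loc=100, scale=15)
    print(iq_needed)  # roughly 220, i.e. about 8 sigma above the mean
    ```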

    Now, if we assume that IQ is unaffected by time, it is entirely possible that two of those people are alive at the same time, or even all four of them.

    These people would have IQ scores placing them wayyy above the population of their own time period, but that wouldn't change the fact that those scores are accurate.

    The scores have nothing to do with the current living population of humanity; those scores are supposed to be relative to general human intelligence regardless of time or place. Ergo, if we assume intelligence is not limited and that humanity survives indefinitely (and that IQ tests actually mean something) then there is a nonzero chance of getting any arbitrary score in the natural numbers. 400, 8000, 10^23, who cares.

    As long as you can write long enough tests and you keep testing humans long enough, you'll eventually find someone who scores at those levels without your test being defective. That's how probability works.

    If you still don’t get what I’m saying or what a normal distribution is, I suggest you go to YouTube or peertube to look it up. Chances are they’ll be able to explain it better than me lol

    • FauxPseudo @lemmy.world · 2 days ago

      There is a limit. A limit based on population. Because, again, it’s not a score. It’s a quotient.

      (Mental Age / Chronological Age) × 100.

      Chronological age is the average score of others your age. If you aren’t comparing to the population then you aren’t calculating an IQ.

      • AnarchoEngineer@lemmy.dbzer0.com · 2 days ago

        Originally, IQ was a score obtained by dividing a person’s estimated mental age, obtained by administering an intelligence test, by the person’s chronological age. The resulting fraction (quotient) was multiplied by 100 to obtain the IQ score.

        “Originally” because that's no longer how modern IQ tests work: now the scores are fitted to a normal distribution, which gives a much more reliable and repeatable measurement.
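
        A tiny illustration of the difference, with invented numbers for the ages and the z-score:

        ```python
        def ratio_iq(mental_age, chronological_age):
            """The original quotient definition: (mental age / chronological age) * 100."""
            return mental_age / chronological_age * 100

        def deviation_iq(z_score):
            """The modern definition: a z-score on the normed test, rescaled to mean 100, SD 15."""
            return 100 + 15 * z_score

        print(ratio_iq(12, 10))   # 120.0: a 10-year-old performing like a typical 12-year-old
        print(deviation_iq(2.0))  # 130.0: two standard deviations above the norming sample's mean
        ```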

        Furthermore, even if that quotient formula were still used, the average score of others your age is still a population parameter (something whose true value you cannot measure directly); you can only sample and estimate it for a possibly unbounded population. Your confidence in your estimate of the average depends on the number of samples; the actual parameter does not, because it is (supposedly) an inherent property of the class of things you're sampling.
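
        A quick simulation of that point (the “true” parameters below are invented just to generate data): the population mean stays fixed, while the uncertainty of our estimate shrinks as the sample grows.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        true_mean, true_sd = 100, 15  # the population parameters we can never observe directly

        for n in (10, 1_000, 100_000):
            sample = rng.normal(true_mean, true_sd, size=n)
            estimate = sample.mean()
            std_error = sample.std(ddof=1) / np.sqrt(n)  # uncertainty of the estimate
            print(f"n={n:>7}: estimated mean {estimate:6.2f} +/- {std_error:.2f}")
        ```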

        Please just go through a statistics crash course; I don't know how to explain this better.