AI Science Technology

MIT, Harvard Scientists Find AI Can Recognize Race From X-rays (bostonglobe.com) 291

A doctor can't tell if somebody is Black, Asian, or white, just by looking at their X-rays. But a computer can, according to a surprising new paper by an international team of scientists, including researchers at the Massachusetts Institute of Technology and Harvard Medical School. From a report: The study found that an artificial intelligence program trained to read X-rays and CT scans could predict a person's race with 90 percent accuracy. But the scientists who conducted the study say they have no idea how the computer figures it out. "When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake," said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science, and coauthor of the paper, which was published Wednesday in the medical journal The Lancet Digital Health. "I honestly thought my students were crazy when they told me."

At a time when AI software is increasingly used to help doctors make diagnostic decisions, the research raises the unsettling prospect that AI-based diagnostic systems could unintentionally generate racially biased results. For example, an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it's best for a specific person. Meanwhile, the patient's human physician wouldn't know that the AI based its diagnosis on racial data.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • not intelligent (Score:3, Insightful)

    by awwshit ( 6214476 ) on Monday May 16, 2022 @03:11PM (#62540078)

    > the scientists who conducted the study say they have no idea how the computer figures it out

    That is a real problem. For the most part, computers are fairly well understood, and understandable. This thing we call AI isn't really intelligent at all, and we don't even understand it. AI still has all of the garbage-in/garbage-out problems of any program; we just don't know how to fix AI when it does something dumb. tay.ai was shut down for a reason. We've actually learned nothing here.

    • by account_deleted ( 4530225 ) on Monday May 16, 2022 @03:20PM (#62540116)
      Comment removed based on user account deletion
      • by Nkwe ( 604125 )
        And appropriately, the Slashdot quote of the day happens to be "Don't panic".
    • Re:not intelligent (Score:5, Insightful)

      by Anonymous Coward on Monday May 16, 2022 @03:37PM (#62540196)

      We've actually learned nothing here.

      Wrong. We've learned that there are categorical differences, visible in X-ray images, that indicate different races. The fact that you can't explain it (or just don't like the ramifications) is irrelevant.

      • Re: (Score:2, Insightful)

        by tsm_sf ( 545316 )

        Part of the problem is that the word race means one thing to scientists and another thing to evil republicans. This is hashed out pretty well in high school and 100-level college classes but there are a lot of dumb people out there.

        Same deal with virtue signaling. A term of art in the sciences that's almost universally misunderstood among dumbfucks.

        • by ereshkigal20 ( 6200780 ) on Monday May 16, 2022 @10:01PM (#62541222) Homepage
        by ereshkigal20 ( 6200780 ) on Monday May 16, 2022 @10:01PM (#62541222) Homepage
          You can tell sex from skeleton measurements. It has long been known that some skull shapes are broadly a bit different between Negroid and Caucasoid (and other) groups. There are more bits, some of which may be seen in sports medicine. Why is anyone scandalized? And why would anyone be so ideologically blinded not to expect such differences? They may not work for every individual but the measurements show that the groupings are based on reality. Get used to it.
      • Re:not intelligent (Score:4, Interesting)

        by awwshit ( 6214476 ) on Monday May 16, 2022 @04:18PM (#62540380)

        When we call it AI, and we say 'it sees things that experts do not see', we are conveying trust and accuracy. Is 90% accurate good enough? How off the charts bad are the other 10%? The article only leaves us with questions about how useful or actually evil this could be. Again, we learned nothing.

        Medicine should be personalized, not racialized. If you are not well we should consider your personal medical history and your personal condition - statistics on other people in your same race won't make you any more or less healthy (that part is built into your personal medical history after all). Does your doctor think you are lying about race such that AI verification is required? And how would it help your doctor to confirm your race?

        Building a big version of 'Not Hot Dog' does not build a medical tool.

      • Re: (Score:2, Interesting)

        What we've probably learned is that blacks and whites tend to go to different hospitals with different x-ray facilities, and the AI can tell what x-ray facility is used and guesses from that the race of the patient.

      • Many years ago I was watching a forensic show, and the scientist said that they knew this person (bones they found) was a black male approximately age 20, and explained how you could tell by the leg bones. So, I assume this is probably more like something we just don't hear about, not some amazing new discovery.
    • by ceoyoyo ( 59147 )

      It's not really a problem. Once a physician becomes skilled we don't really know how they make most decisions either. The difference is that the AI's performance is tracked very carefully.

      • Understanding the human brain is not a prerequisite for understanding current AI technology, AI programs, or AI output.

        There is a huge gap in capability from a program that is good at sorting pictures to a 'skilled physician' and AI isn't even close. AI does not deserve the benefit of the doubt, quite the opposite - extraordinary claims need extraordinary proof.

        • by ceoyoyo ( 59147 )

          I'm not really sure what you're trying to say. It doesn't seem to have much relevance to my post though.

          • The training of an AI is nothing like the training of a physician. While a physician can earn the benefit of the doubt from other humans, the AI has not earned and does not deserve the benefit of the doubt.

            There is no reason to trust the accuracy of the result in this study, especially when the authors claim to not know how it works. We do not have to know how humans work to say that we should demand to know how computers work. Your comparison between AI and human doctors does not make sense or even apply.

            • by ceoyoyo ( 59147 )

              The training of an AI is nothing like the training of a physician.

              Stop you right there. For many tasks it's very similar. You give them a bunch of cases, get them to predict what they are, tell them the right answer, and repeat as their predictions improve. This can be formally sitting them down with a stack of charts/films/whatever, or an on the job "here, what do you think of this case?"

              Physicians even have these things called "rounds" where they get together, show each other their weird cases, get everyo

              • Tell me, when do the AI's get together and do rounds?

                Seeing a series of positive and negative pictures, for some definition of seeing, is nothing like medical school, or like medical training, or like medical practice.

    • That "we have no idea" is almost certainly BS too. I expect given some opportunity for statistical analysis, a "doctor" could recognize race with 90% accuracy within a pre-defined sample too. (The quotes are because bone work like this is not medicine, it's physical anthropology.) One problem is the success mentioned is a comparison of apples and oranges. The statement "A doctor cannot tell ..." is really "A physical anthropologist cannot tell with 100% accuracy within samples controlled for other k

      • I guess the question is, would you drive a car that only worked 90% of the time :)

        It always seemed to me the false narrative was "those bad classical liberals, republicans, and color-blind racists are trying to claim race doesn't exist" :). I suppose it all depends on which narrative bubble you spend more time in :)

    • by tlhIngan ( 30335 )

      That is a real problem. For the most part, computers are fairly well understood, and understandable. This thing we call AI isn't really intelligent at all, and we don't even understand it. AI still has all of the garbage-in/garbage-out problems of any program; we just don't know how to fix AI when it does something dumb. tay.ai was shut down for a reason. We've actually learned nothing here.

      We understand AI fairly well.

      What we call AI is really better known as a pattern recognition engine. Computers

      • > The problem is, we don't know what the pattern is or what to look for. And that's the problem.

        Right, so we cannot be so quick to determine that the training data is not biased.

      • Computers have traditionally been poor at recognizing patterns, but humans are great at it. However, using neural networks, we can train networks to recognize patterns and they're detecting patterns even beyond what we're seeing.

        You don't even have to go for neural networks. "Traditionally", computers had exceeded human abilities, too. Take for example EQP, which exceeded the pattern recognition abilities of decades of mathematicians when it found a proof to the Robbins conjecture -- without the help of any neural network.

    • There are neural-net engines that can tell which sections of an image contribute most to a result. Maybe the original engine they used can't, but if they re-run the training thru the type I mentioned it may give stronger clues.
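    One common attribution technique along those lines is occlusion sensitivity: mask out each region of the input and measure how much the model's score drops. A minimal numpy sketch, using a stand-in scoring function rather than any real x-ray model (everything here is illustrative, not the study's setup):

    ```python
    import numpy as np

    def occlusion_map(image, score_fn, patch=4):
        """Score drop when each patch x patch region is zeroed out.

        Larger drops mean the region contributed more to the output.
        """
        base = score_fn(image)
        h, w = image.shape
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                masked = image.copy()
                masked[i:i + patch, j:j + patch] = 0.0
                heat[i // patch, j // patch] = base - score_fn(masked)
        return heat

    # Stand-in "model": responds only to the top-left quadrant.
    rng = np.random.default_rng(0)
    img = rng.random((16, 16))
    score = lambda x: x[:8, :8].sum()

    heat = occlusion_map(img, score)
    # Patches outside the top-left quadrant produce zero score drop,
    # so the heat map localizes what the "model" is looking at.
    ```

    Re-running the study's training through this kind of probe is what the parent suggests; it gives region-level clues even when the network's weights themselves are opaque.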

  • Why is this bad? (Score:5, Insightful)

    by FuegoFuerte ( 247200 ) on Monday May 16, 2022 @03:18PM (#62540110)

    If the computer identifies someone as most likely African American, say, based on x-rays, and then raises the probability that certain symptoms may be the result of sickle cell anemia (which is primarily found in people with African heritage), that seems like a reasonable direction to go, and the same type of thought process a doctor might follow.

    Similarly, if the computer sees someone most likely has pasty white north european ancestry, and lives in Southern California, and recommends checking for skin cancer, that's not unreasonable either.

    In other words... not everything that takes ethnicity/ancestry into account is inherently bad. Not all bias is bad bias. A Ukrainian citizen is probably biased against uniformed Russian soldiers, and toward uniformed Ukrainian soldiers, because that person makes the (correct) assumption that the Russian is likely to torture and/or kill them, while the Ukrainian is likely there to help them. That bias is reasonable and good. If, as a doctor, I'm biased to look harder for a certain disease in an ethnicity with a higher incidence of that disease, that's also reasonable. A man would be rightly upset to go to a doctor and be screened for ovarian cancer, and a woman would be rightly upset to go to a doctor and be given a prostate exam. The doctor is rightly biased against checking a woman for prostate issues, and a man for ovary issues, and this is not a bad thing.

    • by suutar ( 1860506 ) on Monday May 16, 2022 @03:29PM (#62540154)

      What's bad is that if we don't know what parts of the x-ray it's looking at to get this answer, we can't improve it, and if it breaks, we can't fix it.

    • by mhkohne ( 3854 )

      Because if you don't know how it works then it may be working only because something else racist in the system by which the X-rays are taken is leaving a marker (like they place black people differently on the table for some dumbass reason).

  • by gurps_npc ( 621217 ) on Monday May 16, 2022 @03:25PM (#62540138) Homepage

    Race isn't a real thing. There is no single gene (or group of fewer than ten) that identifies it. It is a social construct made up of a combination of skin tone, hair and facial features.

    It's why so often people cannot tell the difference between hispanic, First Nations, arabic or certain asian populations. Despite the fact that all of these groups independently evolved thousands of miles away from each other.

    These features that we use to identify race are all superficial, so when you remove those superficial cues, we do not see anything.

    However, they did evolve separately and developed other superficial cues in bone structure that we never notice. So an AI can easily be trained to find those minor bone-structure differences.

    But those structures are not present in all, so a solid 90% value seems about right to me.

    I bet an AI could do pretty similar with names as well. Not that many Chads from Chad. Or Georgias from Georgia (the country).

    • It's why so often people cannot tell the difference between hispanic, First Nations, arabic or certain asian populations.

      I'll bite - why is First Nations capitalized, but hispanic, arabic and asian NOT capitalized?

    • How do they handle all of us mix-breeds?

      Do filipinos count as Asian, or White (from the Spaniards)? Do Spanish count as white, or black (from the Moors)? Do Mongolians count as Asian, or White? And where the hell does Tiger Woods sit?

      90% matching to some messy, arbitrary, unscientific category is like having a computer algorithm that can tell with 90% certainty if a cat is cute, ugly, or satanic.

      • Ah, "self-reported race". There's your confound right there.

        In this study, we investigated a large number of publicly and privately available large-scale medical imaging datasets and found that self-reported race is accurately predictable by AI models trained with medical image pixel data alone as model inputs.

        • You have it backwards. If the labels were assigned arbitrarily, they would be impossible to predict.
          • Arbitrary doesn't mean random or stochastic :)

            You can arbitrarily draw county lines, and still be able to predict income based on what county someone lives in.

            When you train a model to find something, it doesn't mean that the something is real - only that you can create a pattern recognition machine. Looking for shapes in clouds doesn't make the shapes real :)

      • 90% matching to some messy, arbitrary, unscientific category is like having a computer algorithm that can tell with 90% certainty if a cat is cute, ugly, or satanic.

        Having spent time with more cats than is reasonable, both outdoors and indoors, I would say 90% matching would be to answer cute and satanic for all cats. Very few are legitimately ugly. And all of them are satanic. They can't help it. It's just what they are.

      • Re: (Score:2, Interesting)

        Comment removed based on user account deletion
        • by AmiMoJo ( 196126 )

          How on Earth can "Dutch" be a race, or even a genetic trait? The Netherlands hasn't existed long enough to have any genetic influence, and how can it be significantly different to "Celtic" when most of the people living there since the start of recorded history are descended from the Celtic tribes?

          Sounds like 23 and Me are just making this shit up.

          • My guess is that they're just feeding a model :)

            Take a couple hundred Dutch families that have lived there for 500 years (proclaimed in 1581 or so), and use them as your "Dutch" cloud.

            Take a couple hundred UK families that have lived there for 2000 years, and use them as your "Celtic" cloud.

            Whatever new sample they get, they just do some fuzzy pattern matching to see how similar it is to each particular "cloud", then they divvy up 100% of a person into fractions based on whatever proxy they use for genetic
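            That "cloud" matching could be sketched as nearest-centroid scoring - purely a guess at the general idea, with invented marker data, not 23andMe's actual method:

            ```python
            import numpy as np

            rng = np.random.default_rng(4)

            # Reference "clouds": samples from two hypothetical populations
            # in a made-up 5-dimensional marker space.
            cloud_dutch = rng.normal(0.0, 1.0, (300, 5))
            cloud_celtic = rng.normal(1.0, 1.0, (300, 5))
            centroids = {
                "Dutch": cloud_dutch.mean(axis=0),
                "Celtic": cloud_celtic.mean(axis=0),
            }

            def ancestry_fractions(sample, centroids):
                """Divide 100% of a person across groups by inverse squared
                distance to each group centroid (an illustrative rule only)."""
                weights = {name: 1.0 / (np.sum((sample - c) ** 2) + 1e-9)
                           for name, c in centroids.items()}
                total = sum(weights.values())
                return {name: w / total for name, w in weights.items()}

            fractions = ancestry_fractions(cloud_dutch[0], centroids)
            # Fractions always sum to 1; a sample drawn from the Dutch cloud
            # usually leans Dutch, but overlap makes individual calls fuzzy.
            ```

            The fuzziness is the point: with overlapping clouds, the fractions are a similarity score dressed up as ancestry, which matches the parent's skepticism.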

        • In your case, they'd take the details of 25%, 62.5%, 12.5%, and boil it all down to "it matches self-reported whiteness".

          What the AI can determine is "is self-reported classification into 3 arbitrary categories observable in various xrays". They could probably train an AI to recognize xrays from self-reported democrats, republicans, and independents.

          One day, under our tech overlords, we may actually have a universal human DNA database, and be able to calculate with precision ancestry and inter-relatedness

        • I wonder how many people have put dog drool into one of those 23 and me test tubes.
    • Comment removed based on user account deletion
      • by ceoyoyo ( 59147 )

        There are a range of skin colours. Europeans swap people in and out of the white category depending on how fashionable things like tea and Mediterranean vacations are, while Americans do the same depending on how afraid of Mexicans they are at any given time.

        • Comment removed based on user account deletion
          • by ceoyoyo ( 59147 )

            No, it's because skin colour is the result of skin melanin concentration, and skin melanin concentration is a characteristic that is subject to natural selection. There has been a more or less continuous range of melanin concentration, and thus skin colour, as long as there have been humans living in varied environments.

            • Heck, it's even subject to unnatural selection. We like to think that Lamarck was wrong, but a woman who sits in a tanning bed every day while she's pregnant could definitely be altering the intrauterine environment, producing traits in her offspring that would be different if she had a different prenatal condition. This is pretty obvious with diabetes (getting worse over generations, particularly if the mother has high blood sugar during pregnancy), but I'm sure it applies elsewhere.

              Skin color is also

    • Heritable traits (Score:5, Insightful)

      by Okian Warrior ( 537106 ) on Monday May 16, 2022 @03:56PM (#62540280) Homepage Journal

      Race isn't a real thing. There is no single gene (or group of fewer than ten) that identifies it. It is a social construct made up of a combination of skin tone, hair and facial features.

      The differences in skin tone, facial features, and such are due to genetic differences.

      Unless you're proposing that these conditions are not heritable. Just to be clear, these differences are heritable - yes?

      Whoever taught you that the concept of race depends on it being a single gene, or has to be less than 10 genes, is wrong.

      • by vux984 ( 928602 )

        I'm not the poster you replied to, but the question is whether any of that constitutes a 'race', and what is the definition of race?

        Are blue point Siamese cats a separate race from seal point Siamese cats? Nobody disputes that the difference between them is genetic, or that the genes are inherited. But that doesn't make them separate "races" of Siamese cat. Or if it does... why?

        The argument that 'race' doesn't exist isn't a refutation that skin color, or mouth shape, or hair color is controlled by genes , o

      • Whoever taught you that the concept of race depends on it being a single gene, or has to be less than 10 genes, is wrong.

        omfgwtfbbq there is no scientific basis behind race [sagepub.com]. Race was invented to justify slavery and other forms of oppression [pbs.org] against PoC. If you believe in race you're a racist, and ignorant too.

    • by ceoyoyo ( 59147 ) on Monday May 16, 2022 @04:11PM (#62540350)

      Anything that involves more than one gene is a social construct? That's an interesting point of view....

    • Consider the following:

      In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.

      (Source [technologyreview.com])

      Depending on the quality of how the tool is built, it may actually just be detecting that the medical images are from a hospital in a black or Asian neighbourhood. Publish-or-perish has pushed a lot of people who don't understand (and/or have time to perform) basic

      • by timeOday ( 582209 ) on Monday May 16, 2022 @06:34PM (#62540820)
        No, the reason their result is interesting is because they addressed all(?) the leading hypotheses on why such results aren't "real" (in the sense of reflecting heritable physical traits).

        Now, I'm not going to go out on a limb and assume they got them all. But any attempt to explain away this result needs to explain not only why their model might have leveraged some sampling error - but how that error could be large enough to account for 90% classification accuracy.

    • by shanen ( 462549 ) on Monday May 16, 2022 @05:11PM (#62540562) Homepage Journal

      Mod parent up? In spite of the lack of "genetic" and the vacuous Subject? I only found it from a reply in the thread, and I was looking for the word, not the moderation.

      I think you have partly encapsulated the problem, but I think the historic factors are crucial to understanding the label "race". In short, it used to be convenient and sometimes even crucial to distinguish between friends and enemies. Especially during wars and invasions. The racial tags were "useful" in those contexts.

      In genetic terms, everything overlaps. There are no racial differences between populations of homo sapiens that are larger than individual differences within the populations.

      Having said that, the genes are important and should be considered for many medical situations. But that should be based on the actual genes that we can actually determine now, not secondary attribute or even patterns of attributes that are frequently correlated with those genes. And the racial tags from historical accidents don't need to be used.

      Funny joke time? In "natural" terms, to achieve equilibrium with random mixing of the genes, the way it should work is different. Every couple should have four kids so the two "genetically inferior" kids can die before reproducing. That's how to keep the situation stable, and Ma Nature loves stability. If you want actual evolutionary progress, then each couple needs to average more than four kids. (But defining genetic merit is an entirely different can of worms. I would argue that every person should have the right to reproduce, but should that include a right to have "genetically better" children out of all the "possible" children?)

      • There's no such thing as "genetically inferior" and "genetically superior" by our standards. Natural selection only cares about who survives, not how they survive.

        You could survive by luck, or maybe by strength, or maybe by brains, or maybe even by *lack* of brains. But if you reproduce, as far as natural selection is concerned, you're a winner in your round. Bonus points if you help your offspring pull off the same trick with the time you have left.

        There may come a time when the only people who survive

  • by mhkohne ( 3854 ) on Monday May 16, 2022 @03:29PM (#62540158) Homepage

    That means you don't actually have a result. You've got the freaking model right there - figure out how to tear it apart and get its reasoning.
    And honestly if you can't do that, then you probably shouldn't be publishing results (yes, I know that's quite difficult. And I'm saying that if you don't know how it works, then you damn well shouldn't be using it anywhere near medicine. Way too much of modern medicine operates in exactly this way because the human body is complex to the point of insanity. And you shouldn't be adding to the problem.)

    • by pieterbos ( 2218218 ) on Monday May 16, 2022 @05:40PM (#62540644)

      We do not know how the brain of a doctor or surgeon works. And yet we trust their decisions and actions.
      Besides, finding something that works without knowing how it works is a valid result in itself. The result, if the authors did not make a mistake, shows that somehow information is present in x-rays that we did not know was there. Plus a method to retrieve some of that information. It might inspire others to find out why this works, or perhaps to discover something else about x-rays of humans that we do not yet know.
      The authors do provide a possible mechanism that could explain why their method works, as can be read in the linked news article. That is unrelated to skeletal structure.

      Also no one is suggesting that this has any direct practical purpose in medicine of any kind whatsoever. Scientific results can be useful and valid without any kind of practical application.

    • by AmiMoJo ( 196126 )

      That's not how this type of AI works. It uses a neural network that can't be understood in any meaningful way, any more than you can understand how a biological brain works by dissecting it.

      Understanding it is not important. What matters is being able to detect these failure modes and find ways to prevent them from happening.

      Unfortunately companies are racing ahead to sell AI systems without bothering to do this kind of testing first.

    • That means you don't actually have a result. You've got the freaking model right there - figure out how to tear it apart and get its reasoning. And honestly if you can't do that, then you probably shouldn't be publishing results...

      Just knowing it's computable is an interesting result, even if you can't exactly explain how it works. The next step might be to dig in to what features the model wound up using to make its determination.

  • by taylorius ( 221419 ) on Monday May 16, 2022 @03:30PM (#62540162) Homepage

    Can a doctor REALLY not tell? Or do they just claim vehemently not to be able to, whilst waving a rainbow flag, wearing their pronoun badge, and hoping not to get sacked.

    It seems fairly obvious to me people hailing from different parts of the world have noticeable physical differences. That's no reason to treat anyone differently of course, but it seems rather dumb to pretend it doesn't exist, and then act all shocked when an AI points out the obvious.

    • by mark-t ( 151149 )

      Can a doctor REALLY not tell? Or do they just claim vehemently not to be able to, whilst waving a rainbow flag, wearing their pronoun badge, and hoping not to get sacked.

      This is what I was thinking as well. I'm pretty sure that a suitably trained doctor who knows what to look for can identify distinguishing traits of skulls of particular races. It probably gets harder for skulls of certain mixed race people, and this is where computers can probably beat human doctors.

      But this is not inherently a bad thi

    • by notsouseful ( 6407080 ) on Monday May 16, 2022 @04:51PM (#62540490)
      The less flamebait-y question is whether a forensic anthropologist could tell the difference. Ever watch the tv show Bones? When all that's left are basically bones and you need to try to identify a corpse, you may end up with them. This is probably not interesting to a general radiologist or general practitioner as they're focused on understanding functional problems that the xrays can enhance, rather than general identification and personal traits of the person around the bones.
    • It seems fairly obvious to me people hailing from different parts of the world have noticeable physical differences.

      This happens among white people. There is obviously a great deal of difference between a blond Nordic type, and a darker Mediterranean type. Just within England, some people can discern the difference between a native of Birmingham, and a native of Manchester, just by the shape of their face. This is just tribalism, and it is built in to human nature. People seek out differences, in appearance, accent, manners, or whatever. The differences are often extremely small, and usually totally irrelevant in determi

  • Comment removed based on user account deletion
  • by ka98 ( 8047016 ) on Monday May 16, 2022 @03:33PM (#62540174)
    It could be as simple as all clinics serving blacks use one type of machine which produces unique imaging artifacts.
    • Or all of the data from one race was from overweight people, or any number of small biases in the training data that skew the results.

      • Nope

        The ability of deep learning models that were trained on the CXP dataset to predict patient race from the body-mass index (BMI) alone was much lower than the image-based chest x-ray models (area under the receiver operating characteristics curve [AUC] 0·55), indicating that race detection is not due to obvious anatomic and phenotypic confounder variables. Similar results were observed across stratified BMI groups (0·92-0·99; appendix p 24).
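        For reference, AUC here is the probability that the model ranks a random positive case above a random negative one; 0·5 is chance. A small sketch of how AUC is computed, with toy scores chosen to land near the quoted values (invented data, not the study's):

        ```python
        import numpy as np

        def auc(scores_pos, scores_neg):
            """AUC as the Mann-Whitney probability P(score_pos > score_neg),
            counting ties as half."""
            pos = np.asarray(scores_pos, dtype=float)[:, None]
            neg = np.asarray(scores_neg, dtype=float)[None, :]
            return (pos > neg).mean() + 0.5 * (pos == neg).mean()

        rng = np.random.default_rng(3)
        # A weak predictor (like BMI alone): heavy class overlap.
        weak_pos = rng.normal(0.2, 1.0, 500)
        weak_neg = rng.normal(0.0, 1.0, 500)
        # A strong predictor: well-separated classes.
        strong_pos = rng.normal(2.0, 1.0, 500)
        strong_neg = rng.normal(0.0, 1.0, 500)

        auc_weak = auc(weak_pos, weak_neg)        # near 0.55
        auc_strong = auc(strong_pos, strong_neg)  # near 0.92
        ```

        The gap between 0·55 (BMI only) and the image models is exactly the kind of comparison the quoted passage is making: if BMI carried the signal, the two AUCs would be close.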

    • Or socioeconomic status, which will leave development traces in bone structure when pronounced enough.

      Haven't read the article - I hope they thought of that and compensated for it.

    • It couldn't, because they specifically disproved that hypothesis.

      we developed models for the detection of racial identity on three large chest x-ray datasets: MIMIC-CXR (MXR), CheXpert (CXP), and Emory chest x-ray (EMX), with both internal validation (ie, testing the model on an unseen subset of the dataset used to train the model) and external validation (ie, testing the model on a completely different dataset than the one used to train the model) to establish baseline performance

      ...

      We also investig
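      Internal versus external validation, as defined in that excerpt, can be sketched with synthetic data and a deliberately simple threshold "model" (all numbers invented; the site effect is a hypothetical intensity offset, not anything from the study):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      def make_dataset(shift, n=5000):
          """Synthetic 1-D 'scans': class 1 has higher mean intensity."""
          y = rng.integers(0, 2, n)
          x = rng.normal(loc=shift + y, scale=1.0, size=n)
          return x, y

      # Internal validation: an unseen subset from the same source.
      x_train, y_train = make_dataset(shift=0.0)
      x_internal, y_internal = make_dataset(shift=0.0)
      # External validation: a different source whose scanner adds an
      # intensity offset (a hypothetical site effect).
      x_external, y_external = make_dataset(shift=1.0)

      # "Model": threshold halfway between the training class means.
      threshold = (x_train[y_train == 0].mean()
                   + x_train[y_train == 1].mean()) / 2

      acc_internal = ((x_internal > threshold) == y_internal).mean()
      acc_external = ((x_external > threshold) == y_external).mean()
      # Internal accuracy holds up; external accuracy drops because the
      # learned threshold no longer sits between the shifted class means.
      ```

      A model that keeps its accuracy under external validation, as reported in the paper, is harder to explain away as a single-site artifact like one clinic's machine.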

    • By that logic, 90% of blacks getting xrays would have to be going to those "special, mono ethnic clinics" which happen to be using these unique devices nobody else uses.

      How likely do you think that is?

  • Overall, I'm not sure anyone should just trust AI in the medical area without the doctors there to provide review and oversight.
  • Just ask the AI to log how it's determining X, and you'll know if its methodology was valid, innovative, or just quackery.
  • Have been known for at least a century. At least people in the shoe business know it.

  • I'm sure they know exactly how, they just can't openly state it.

  • Race is a social construct, so is gender

  • MY AI can correctly predict pronouns from MRI with 99.99% accuracy!

  • "...an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it's best for a specific person."

    Are we at that stage? Seems like the article switches gears, and moves from AI diagnosing conditions, to complaining about possible recommended treatments - and that's a very, very different problem set.

    "Meanwhile, the patient's human physician wouldn't know that the AI based its diagnosis on racial data."

    And is that bad? It doesn't "base i

  • Nothing new here (Score:5, Interesting)

    by Duh-People-Really ( 6172216 ) on Monday May 16, 2022 @06:23PM (#62540774)

    None of these people have ever sat down and talked with a Physical Anthropologist. When I was an undergrad, I wrote a paper on the "Giles and Elliot Discriminant Function Analysis" apps.dtic.mil/dtic/tr/fulltext/u2/a065448.pdf to determine the racial distribution of the University's skeletal collection.
    Small sample and I honestly knew the race of all the skulls measured/tested.
    It worked, meaning I did not have to fudge the numbers to make it work. The main point of my paper was to computerize the input of data and see if the technique worked as well as expected.

    I used the analysis to gain experience in writing code (FORTRAN IV) and gain experience in measuring the skulls. The paper did help me earn my BA in Anthropology and assisted in my efforts to demonstrate that computers were not a thing to be avoided at all costs.

    Oh - this was in 1976. I bought a TRS-80 in 1977 and was able to rewrite the program from FORTRAN IV to BASIC. Later (three or four years) in graduate school, I suggested that Microcomputers would be invaluable to Anthropologists/Archaeologist. Only the Anthro students and a few professors laughed me out of the lab one day. One remark was "How are you going to keep the dust out of the floppy drive?"

    Some - or in the case of the jokers in this post - just have no idea of the analytical methods used outside of their small, narrow, tunnel vision experience. When I first read this title, my first thought was --- FUCKING DUH. Physical Anthropologists have been sexing/racing skeletons for the last 100 years.
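    The discriminant function approach mentioned above reduces to a weighted sum of measurements compared against a sectioning point. A toy two-group Fisher discriminant on synthetic "skull measurements" (the values and coefficients are invented for illustration, not Giles and Elliot's published ones):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic cranial "measurements" (mm) for two reference groups.
    group_a = rng.normal([180.0, 140.0, 130.0], 5.0, size=(200, 3))
    group_b = rng.normal([185.0, 135.0, 125.0], 5.0, size=(200, 3))

    # Fisher's linear discriminant: w = S_w^-1 (mean_a - mean_b).
    mean_a, mean_b = group_a.mean(axis=0), group_b.mean(axis=0)
    pooled_cov = (np.cov(group_a.T) + np.cov(group_b.T)) / 2
    w = np.linalg.solve(pooled_cov, mean_a - mean_b)

    # Sectioning point: midpoint of the projected group means.
    cutoff = ((group_a @ w).mean() + (group_b @ w).mean()) / 2

    def classify(measurements):
        """Assign a skull to group A or B by its discriminant score."""
        return "A" if measurements @ w > cutoff else "B"

    # Resubstitution accuracy on the reference samples: imperfect,
    # because the groups overlap, which is why published discriminant
    # functions also report accuracy well below 100%.
    acc_a = np.mean((group_a @ w) > cutoff)
    acc_b = np.mean((group_b @ w) < cutoff)
    ```

    This is the same shape of result as the 1976 paper describes: the technique works on a known-label collection, without needing to fudge the numbers, but individual overlap keeps it short of perfect.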

  • One of the first things I always think when "AI does this or does that" is that it comes down to sample bias... Like when AI filtered out resumes from women and people with black sounding names, because most tech companies only hire white men... The "AI" filter was based on existing employees that was deemed "successful" - hence... sample bias... In this case... perhaps hospitals in predominately black areas use different (cheaper?) x-ray machines/film and so people shot on this type of machine/film is mor
  • My understanding is there's a tremendous amount of diversity in people of African descent. Same for Asian, given how long people have lived in Asia. Personally, I can't pin most people down more closely than Asia or Africa, while someone who grew up in east Asia can tell Chinese from Korean from Japanese. It wouldn't surprise me if someone who grew up in Africa finds all the various ethnic groups quite distinguishable.

    Anyway, I wonder if their classifications were specific enough to be making meaningful deci
