{"id":8001,"date":"2025-05-13T10:14:04","date_gmt":"2025-05-13T10:14:04","guid":{"rendered":"https:\/\/mpelembe.net\/?p=8001"},"modified":"2025-05-13T10:15:52","modified_gmt":"2025-05-13T10:15:52","slug":"ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters","status":"publish","type":"post","link":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/","title":{"rendered":"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters"},"content":{"rendered":"<p>  <span><a href=\"https:\/\/theconversation.com\/profiles\/tiarna-lee-2368755\">Tiarna Lee<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/kings-college-london-1196\">King&#8217;s College London<\/a><\/em><\/span><\/p>\n<p>Imagine an AI model that can use a heart scan to guess what racial category you\u2019re likely to be put in \u2013 even when it hasn\u2019t been told what race is, or what to look for. It sounds like science fiction, but it\u2019s real.<\/p>\n<p><!--more--><\/p>\n<p><a href=\"https:\/\/academic.oup.com\/ehjdh\/advance-article\/doi\/10.1093\/ehjdh\/ztaf008\/8038011?login=true\">My recent study<\/a>, which I conducted with colleagues, found that an AI model could guess whether a patient identified as Black or white from heart images with up to 96% accuracy \u2013 despite no explicit information about racial categories being given. 
<\/p>\n<p>It\u2019s a striking finding that challenges assumptions about the objectivity of AI and highlights a deeper issue: AI systems don\u2019t just reflect the world \u2013 they <a href=\"https:\/\/www.ucl.ac.uk\/news\/2024\/dec\/bias-ai-amplifies-our-own-biases\">absorb and reproduce the biases<\/a> built into it.<\/p>\n<p>First, it\u2019s important to be clear: <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC7682789\/\">race is not a biological category<\/a>. Modern genetics <a href=\"https:\/\/academic.oup.com\/emph\/article\/9\/1\/232\/6299389\">shows there is more variation<\/a> within supposed racial groups than between them.<\/p>\n<p><a href=\"https:\/\/www.genome.gov\/genetics-glossary\/Race\">Race is a social construct<\/a>, a set of categories invented by societies to classify people based on perceived physical traits and ancestry. 
<a href=\"https:\/\/www.bbc.co.uk\/future\/article\/20250417-biological-reality-what-genetics-has-taught-us-about-race\">These classifications don\u2019t map<\/a> cleanly onto biology, but they shape everything from lived experience to access to care.<\/p>\n<p>Despite this, many AI systems are now learning to detect, and potentially act on, these social labels, because they are built using data shaped by a <a href=\"https:\/\/www.ohchr.org\/en\/stories\/2024\/07\/racism-and-ai-bias-past-leads-bias-future\">world that treats race as if it were biological fact<\/a>.<\/p>\n<figure>\n            <iframe width=\"440\" height=\"260\" data-src=\"https:\/\/www.youtube.com\/embed\/Cx284WjpEQY?wmode=transparent&amp;start=0\" frameborder=\"0\" allowfullscreen=\"\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\" data-load-mode=\"1\"><\/iframe><\/p>\n<\/figure>\n<p>AI systems are already transforming healthcare. They can <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/ckdpg5p820xo\">analyse chest X-rays<\/a>, <a href=\"https:\/\/www.cedars-sinai.org\/newsroom\/is-artificial-intelligence-better-at-assessing-heart-health\/\">read heart scans<\/a> and flag potential issues faster than human doctors \u2013 <a href=\"https:\/\/www.ahajournals.org\/doi\/10.1161\/CIRCIMAGING.119.009214\">in some cases<\/a>, in seconds rather than minutes. <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11047988\/\">Hospitals are adopting these tools<\/a> to improve efficiency, reduce costs and standardise care.<\/p>\n<h2>Bias isn\u2019t a bug \u2013 it\u2019s built in<\/h2>\n<p>But no matter how sophisticated, <a href=\"https:\/\/www.unesco.org\/en\/artificial-intelligence\/recommendation-ethics\/cases\">AI systems are not neutral<\/a>. 
They are trained on real-world data \u2013 and that data <a href=\"https:\/\/unu.edu\/article\/never-assume-accuracy-artificial-intelligence-information-equals-truth\">reflects real-world inequalities<\/a>, <a href=\"https:\/\/www.gov.uk\/discrimination-your-rights\">including those based on<\/a> race, gender, age and socioeconomic status. These systems can learn <a href=\"https:\/\/www.bbc.co.uk\/news\/health-66259618\">to treat patients differently<\/a> based on these characteristics, even when no one explicitly programs them to do so.<\/p>\n<p><a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11542778\/\">One major source of bias<\/a> is imbalanced training data. If a model learns primarily from lighter-skinned patients, for example, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0169260724000403?via%3Dihub\">it may struggle<\/a> to detect conditions in people with darker skin. <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.abq6147\">Studies in dermatology<\/a> have already shown this problem.<\/p>\n<p>Even language models like ChatGPT aren\u2019t immune: <a href=\"https:\/\/doi.org\/10.1038\/s41746-023-00939-z\">one study found<\/a> evidence that some models still reproduce outdated and false medical beliefs, such as the myth that Black patients have thicker skin than white patients.<\/p>\n<p>Sometimes AI models appear accurate, but for the wrong reasons \u2013 a phenomenon called <a href=\"https:\/\/www.nature.com\/articles\/s42256-021-00338-7\">shortcut learning<\/a>. Instead of learning the complex features of a disease, a model might rely on irrelevant but easier-to-spot clues in the data.<\/p>\n<p>Imagine two hospital wards: one uses scanner A to treat severe COVID-19 patients, another uses scanner B for milder cases. 
The AI might learn to associate scanner A with severe illness \u2013 not because it understands the disease better, but because it\u2019s picking up on image artefacts specific to scanner A.<\/p>\n<p>Now imagine a seriously ill patient is scanned using scanner B. The model might mistakenly classify them as less sick \u2013 not due to a medical error, but because it learned the wrong shortcut.<\/p>\n<p>This same kind of flawed reasoning could apply to race. If there are differences in disease prevalence between racial groups, the AI could end up learning to identify race instead of the disease \u2013 with dangerous consequences.<\/p>\n<figure>\n            <iframe width=\"440\" height=\"260\" src=\"https:\/\/www.youtube.com\/embed\/BDDGhNHtr-c?wmode=transparent&amp;start=0\" frameborder=\"0\" allowfullscreen=\"\"><\/iframe>\n<\/figure>\n<p>In the heart scan study, researchers found that the AI model wasn\u2019t actually focusing on the heart itself, where there were few visible differences linked to racial categories. Instead, it drew information from areas outside the heart, such as subcutaneous fat, as well as from image artefacts \u2013 unwanted distortions such as motion blur, noise or compression that can degrade image quality. These artefacts often come from the scanner and can influence how the AI interprets the scan.<\/p>\n<p>In this study, Black participants had a higher-than-average BMI, which could mean they had more subcutaneous fat, though this wasn\u2019t directly investigated. Some research has shown that Black individuals tend to have less visceral fat and <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1038\/oby.2010.248\">smaller waist circumference<\/a> at a given BMI, but more <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/12601630\/\">subcutaneous fat<\/a>. 
This suggests the AI may have been picking up on these indirect racial signals, rather than anything relevant to the heart itself.<\/p>\n<p>This matters because when AI models learn race \u2013 or rather, social patterns that reflect racial inequality \u2013 without understanding context, the risk is that they may reinforce or worsen existing disparities.<\/p>\n<p>This isn\u2019t just about fairness \u2013 it\u2019s about safety.<\/p>\n<h2>Solutions<\/h2>\n<p>But there are solutions:<\/p>\n<p><strong>Diversify training data<\/strong>: <a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-23443-9_22\">studies have shown<\/a> that making <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC8515002\/\">datasets more representative<\/a> improves AI performance across groups \u2013 without harming accuracy for anyone else.<\/p>\n<p><strong>Build transparency<\/strong>: many AI systems <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S0933365721001512?via%3Dihub\">are considered \u201cblack boxes\u201d<\/a> because we don\u2019t understand how they reach their conclusions. The heart scan study used heat maps to show which parts of an image influenced the AI\u2019s decision, creating a form of <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11391805\/\">explainable AI<\/a> that helps doctors and patients trust (or question) results \u2013 so we can catch when it\u2019s using inappropriate shortcuts.<\/p>\n<p><strong>Treat race carefully<\/strong>: researchers and developers must recognise that <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11626588\/\">race in data is a social signal<\/a>, not a biological truth. It requires thoughtful handling to avoid perpetuating harm.<\/p>\n<p>AI models are capable of spotting patterns that even the most trained human eyes might miss. That\u2019s what makes them so powerful \u2013 and potentially so dangerous. 
They learn from <a href=\"https:\/\/www.bloomberg.com\/graphics\/2023-generative-ai-bias\/\">the same flawed world we do<\/a>. That includes how we treat race: not as a scientific reality, but as a social lens through which health, opportunity and risk are unequally distributed.<\/p>\n<p>If AI systems learn our shortcuts, they may repeat our mistakes \u2013 faster, at scale and with less accountability. And when lives are on the line, that\u2019s a risk we cannot afford.<\/p>\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/tiarna-lee-2368755\">Tiarna Lee<\/a>, Doctoral Candidate, School of Biomedical Engineering &#038; Imaging Sciences, <em><a href=\"https:\/\/theconversation.com\/institutions\/kings-college-london-1196\">King&#8217;s College London<\/a><\/em><\/span><\/p>\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters-254416\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Tiarna Lee, King&#8217;s College London Imagine an AI model that can use a heart scan to guess what racial category you\u2019re likely to be<a class=\"moretag\" href=\"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":8006,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"googlesitekit_rrm_CAowu7GVCw:productID":"","_crdt_document":"","activitypub_content_warning":"","activitypub_content_visibility":"","activitypub_max_image_attachments":3,"activitypub_interaction_policy_quote":"anyone","activitypub_status":"federated","footnotes":""},"categories":[3],"tags":[202,52,4764,53,722,54,10328,4763,14867,3084,726,14868,723],"class_list":["post-8001","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology","tag-artificial-general-intelligence","tag-artificial-intelligence","tag-artificial-intelligence-in-healthcare","tag-computational-neuroscience","tag-creative-commons","tag-cybernetics","tag-data-science","tag-ethics-of-artificial-intelligence","tag-explainable-artificial-intelligence","tag-kings-college-london","tag-shutterstock","tag-tiarna-lee","tag-united-kingdom"],"featured_image_src":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1-1024x576.jpg","blog_images":{"medium":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1-300x169.jpg","large":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1-1024x576.jpg"},"ams_acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - 
https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters - Mpelembe Network<\/title>\n<meta name=\"description\" content=\"Research reveals AI models are learning shortcuts, like racial categories, instead of disease, potentially reinforcing bias in healthcare.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters - Mpelembe Network\" \/>\n<meta property=\"og:description\" content=\"Research reveals AI models are learning shortcuts, like racial categories, instead of disease, potentially reinforcing bias in healthcare.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/\" \/>\n<meta property=\"og:site_name\" content=\"Mpelembe Network\" \/>\n<meta property=\"article:published_time\" content=\"2025-05-13T10:14:04+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-05-13T10:15:52+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" 
content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\"},\"headline\":\"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters\",\"datePublished\":\"2025-05-13T10:14:04+00:00\",\"dateModified\":\"2025-05-13T10:15:52+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/\"},\"wordCount\":1052,\"image\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/file-20250411-56-dfjpk1.jpg\",\"keywords\":[\"Artificial general intelligence\",\"Artificial intelligence\",\"Artificial intelligence in healthcare\",\"Computational neuroscience\",\"Creative Commons\",\"Cybernetics\",\"Data science\",\"Ethics of artificial intelligence\",\"Explainable artificial intelligence\",\"king's college london\",\"SHUTTERSTOCK\",\"Tiarna Lee\",\"United 
Kingdom\"],\"articleSection\":[\"Technology\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/\",\"name\":\"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters - Mpelembe Network\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/file-20250411-56-dfjpk1.jpg\",\"datePublished\":\"2025-05-13T10:14:04+00:00\",\"dateModified\":\"2025-05-13T10:15:52+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\"},\"description\":\"Research reveals AI models are learning shortcuts, like racial categories, instead of disease, potentially reinforcing bias in 
healthcare.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#primaryimage\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/file-20250411-56-dfjpk1.jpg\",\"contentUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2025\\\/05\\\/file-20250411-56-dfjpk1.jpg\",\"width\":1920,\"height\":1080},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/mpelembe.net\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#website\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/\",\"name\":\"Mpelembe Network\",\"description\":\"Collaboration 
Platform\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/mpelembe.net\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"sameAs\":[\"https:\\\/\\\/mpelembe.net\"],\"url\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/author\\\/admin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters - Mpelembe Network","description":"Research reveals AI models are learning shortcuts, like racial categories, instead of disease, potentially reinforcing bias in healthcare.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/","og_locale":"en_US","og_type":"article","og_title":"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters - Mpelembe Network","og_description":"Research reveals AI models are learning shortcuts, like racial categories, instead of disease, potentially reinforcing bias in healthcare.","og_url":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/","og_site_name":"Mpelembe Network","article_published_time":"2025-05-13T10:14:04+00:00","article_modified_time":"2025-05-13T10:15:52+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1.jpg","type":"image\/jpeg"}],"author":"admin","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#article","isPartOf":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/"},"author":{"name":"admin","@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608"},"headline":"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters","datePublished":"2025-05-13T10:14:04+00:00","dateModified":"2025-05-13T10:15:52+00:00","mainEntityOfPage":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/"},"wordCount":1052,"image":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#primaryimage"},"thumbnailUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1.jpg","keywords":["Artificial general intelligence","Artificial intelligence","Artificial intelligence in healthcare","Computational neuroscience","Creative Commons","Cybernetics","Data science","Ethics of artificial intelligence","Explainable artificial intelligence","king's college london","SHUTTERSTOCK","Tiarna Lee","United Kingdom"],"articleSection":["Technology"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/","url":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/","name":"AI can guess racial categories from heart scans \u2013 what it means and why it\u00a0matters - Mpelembe 
Network","isPartOf":{"@id":"https:\/\/mpelembe.net\/#website"},"primaryImageOfPage":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#primaryimage"},"image":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#primaryimage"},"thumbnailUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1.jpg","datePublished":"2025-05-13T10:14:04+00:00","dateModified":"2025-05-13T10:15:52+00:00","author":{"@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608"},"description":"Research reveals AI models are learning shortcuts, like racial categories, instead of disease, potentially reinforcing bias in healthcare.","breadcrumb":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#primaryimage","url":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1.jpg","contentUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2025\/05\/file-20250411-56-dfjpk1.jpg","width":1920,"height":1080},{"@type":"BreadcrumbList","@id":"https:\/\/mpelembe.net\/index.php\/ai-can-guess-racial-categories-from-heart-scans-what-it-means-and-why-it-matters\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/mpelembe.net\/"},{"@type":"ListItem","position":2,"name":"AI can guess racial categories from heart scans \u2013 what it means and why 
it\u00a0matters"}]},{"@type":"WebSite","@id":"https:\/\/mpelembe.net\/#website","url":"https:\/\/mpelembe.net\/","name":"Mpelembe Network","description":"Collaboration Platform","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/mpelembe.net\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/mpelembe.net"],"url":"https:\/\/mpelembe.net\/index.php\/author\/admin\/"}]}},"_links":{"self":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/8001","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/comments?post=8001"}],"version-history":[{"count":3,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/8001\/revisions"}],"predecessor-version":[{"id":8013,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/8001\/revisions\/8013"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/media\/8006"}],"wp:attachment":
[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/media?parent=8001"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/categories?post=8001"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/tags?post=8001"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}