{"id":1788,"date":"2023-01-31T10:45:51","date_gmt":"2023-01-31T10:45:51","guid":{"rendered":"https:\/\/mpelembe.net\/?p=1788"},"modified":"2023-03-02T14:23:43","modified_gmt":"2023-03-02T14:23:43","slug":"unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth","status":"publish","type":"post","link":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/","title":{"rendered":"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth"},"content":{"rendered":"<p><span><a href=\"https:\/\/theconversation.com\/profiles\/blayne-haggart-192774\">Blayne Haggart<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/brock-university-1340\">Brock University<\/a><\/em><\/span><\/p>\n<p>Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses to natural-language queries, few have matched those of educators and academics.<\/p>\n<p><!--more--><\/p>\n<p>Academic publishers have moved <a href=\"https:\/\/www.theguardian.com\/science\/2023\/jan\/26\/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers\">to ban ChatGPT from being listed as a co-author and issue strict guidelines outlining the conditions under which it may be used<\/a>. 
Leading universities and schools around the world, from France\u2019s renowned <a href=\"https:\/\/www.reuters.com\/technology\/top-french-university-bans-use-chatgpt-prevent-plagiarism-2023-01-27\/\">Sciences Po<\/a> to <a href=\"https:\/\/www.theguardian.com\/australia-news\/2023\/jan\/10\/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays\">many Australian universities<\/a>, have banned its use. <\/p>\n<p>These bans are not merely the actions of academics worried that they won\u2019t be able to catch cheaters or students who copy sources without attribution. Rather, their severity reflects a question that is not getting enough attention in the endless coverage of OpenAI\u2019s ChatGPT chatbot: Why should we trust anything it outputs?<\/p>\n<p>This is a vitally important question, as ChatGPT and programs like it can easily be used, with or without acknowledgement, in the information sources that form the foundation of our society, especially academia and the news media.<\/p>\n<p>Based on my work on the <a href=\"https:\/\/doi.org\/10.4324\/9781003008309\">political<\/a> <a href=\"https:\/\/doi.org\/10.1007\/978-3-030-14540-8\">economy<\/a> of <a href=\"https:\/\/utorontopress.com\/9781442666221\/copyfight\/\">knowledge governance<\/a>, I argue that these academic bans are a proportionate reaction to the threat ChatGPT poses to our entire information ecosystem, and that journalists and academics should be wary of using it. <\/p>\n<p>Based on its output, ChatGPT might seem like just another information source or tool. In reality, however, ChatGPT \u2014 or, rather, the means by which ChatGPT produces its output \u2014 is <a href=\"https:\/\/www.cigionline.org\/articles\/chatgpt-strikes-at-the-heart-of-the-scientific-world-view\/\">a dagger aimed directly at journalists\u2019 and academics\u2019 credibility as authoritative sources of knowledge<\/a>. 
It should not be taken lightly.<\/p>\n<h2>Trust and information<\/h2>\n<p>Think about why we see some information sources or types of knowledge as more trustworthy than others. Since <a href=\"https:\/\/www.britannica.com\/event\/Enlightenment-European-history\">the European Enlightenment<\/a>, we\u2019ve tended to equate scientific knowledge with knowledge in general. <\/p>\n<p>Science is more than laboratory research: it\u2019s a way of thinking that prioritizes empirically based evidence and transparent methods of evidence collection and evaluation. And it tends to be the gold standard by which all knowledge is judged.<\/p>\n<p>For example, journalists have credibility because they investigate information, cite sources and provide evidence. Even though reporting sometimes contains errors or omissions, those lapses don\u2019t undermine the profession\u2019s authority.<\/p>\n<p>The same goes for op-ed writers, especially academics and other experts, because they \u2014 we \u2014 draw authority from our status as experts in a subject. Expertise involves a command of the sources recognized as comprising legitimate knowledge in our fields. <\/p>\n<p>Most op-eds aren\u2019t citation-heavy, but responsible academics will be able to point you to the <a href=\"https:\/\/www.bloomsbury.com\/ca\/states-and-markets-9781474236935\/\">thinkers<\/a> and <a href=\"https:\/\/www.jstor.org\/stable\/10.5325\/jinfopoli.7.2017.0176\">the work<\/a> <a href=\"https:\/\/www.penguinrandomhouse.com\/books\/12390\/the-social-construction-of-reality-by-peter-l-berger\/\">they\u2019re<\/a> <a href=\"https:\/\/doi.org\/10.24908\/ss.v12i2.4776\">drawing<\/a> <a href=\"https:\/\/doi.org\/10.1080\/1369118X.2012.678878\">on<\/a>. 
And those sources are themselves built on other sources that a reader can check for themselves.<\/p>\n<h2>Truth and outputs<\/h2>\n<p>Because human writers and ChatGPT seem to be producing the same output \u2014 sentences and paragraphs \u2014 it\u2019s understandable that some people may mistakenly confer this scientifically sourced authority onto ChatGPT\u2019s output. <\/p>\n<p>That both ChatGPT and reporters produce sentences is where the similarity ends. What\u2019s most important \u2014 the source of authority \u2014 is not <em>what<\/em> they produce, but <em>how<\/em> they produce it.<\/p>\n<p>ChatGPT doesn\u2019t produce sentences in the same way a reporter does. ChatGPT and other machine-learning large language models may seem sophisticated, but they\u2019re basically just complex autocomplete machines. Only instead of suggesting the next word in an email, they produce the most statistically likely words in much longer packages. <\/p>\n<p>These programs repackage others\u2019 work as if it were something new. They do not \u201cunderstand\u201d what they produce. <\/p>\n<p>The justification for these outputs can never be truth. Their truth is the truth of correlation: the word \u201csentences\u201d completes the phrase \u201cWe finish each other\u2019s \u2026\u201d because it is the most common occurrence, not because it expresses anything that has been observed.<\/p>\n<p>Because ChatGPT\u2019s truth is only a statistical truth, output produced by this program cannot ever be trusted in the same way that we can trust a reporter\u2019s or an academic\u2019s output. 
It cannot be verified, because it is constructed to create output in a way that differs from what we usually think of as \u201cscientific.\u201d <\/p>\n<p>You can\u2019t check ChatGPT\u2019s sources, because its only source is the statistical fact that, most of the time, certain words tend to follow one another.<\/p>\n<p>No matter how coherent ChatGPT\u2019s output may seem, simply publishing what it produces is the equivalent of letting autocomplete run wild. It\u2019s an irresponsible practice because it pretends that these statistical tricks are equivalent to well-sourced and verified knowledge.<\/p>\n<p>Similarly, academics and others who incorporate ChatGPT into their workflow run the existential risk of kicking the entire edifice of scientific knowledge out from underneath themselves. <\/p>\n<p>Because ChatGPT\u2019s output is correlation-based, how does the writer know it is accurate? Did they verify it against actual sources, or does the output simply conform to their personal prejudices? 
And if they\u2019re experts in their field, why are they using ChatGPT in the first place?<\/p>\n<figure class=\"align-center zoomable\">\n            <a href=\"https:\/\/images.theconversation.com\/files\/506942\/original\/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=1000&#038;fit=clip\"><img decoding=\"async\" alt=\"a man gives a lecture while reading from two laptop screens\" src=\"https:\/\/images.theconversation.com\/files\/506942\/original\/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\"><\/a><figcaption>\n              <span class=\"caption\">Academics have authority on their subjects of expertise because there is a scientific, evidence-based method for verifying their work.<\/span><br \/>\n              <span class=\"attribution\"><span class=\"source\">(Shutterstock)<\/span><\/span><br \/>\n            <\/figcaption><\/figure>\n<h2>Knowledge production and verification<\/h2>\n<p>The point is that ChatGPT\u2019s processes give us no way to verify its truthfulness. In contrast, the fact that reporters and academics follow a scientific, evidence-based method of producing knowledge validates their work, even when the results go against our preconceived notions.<\/p>\n<p>The problem is especially acute for academics, given our central role in creating knowledge. Academics who rely on ChatGPT to write even part of a column are no longer relying on the scientific authority embedded in verified sources. <\/p>\n<p>Instead, by resorting to statistically generated text, they are effectively making an argument from authority. Such a practice also misleads the reader, who can\u2019t distinguish between text written by the author and text generated by an AI.<\/p>\n<p>ChatGPT may produce seemingly legible knowledge, as if by magic. But we would be well advised not to mistake its output for actual, scientific knowledge. One should never confuse coherence with understanding.<\/p>\n<p>ChatGPT promises easy access to new and existing knowledge, but it is a poisoned chalice. Readers, academics and reporters beware.<\/p>\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/blayne-haggart-192774\">Blayne Haggart<\/a>, Associate Professor of Political Science, <em><a href=\"https:\/\/theconversation.com\/institutions\/brock-university-1340\">Brock University<\/a><\/em><\/span><\/p>\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth-198463\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Blayne Haggart, Brock University Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses<a class=\"moretag\" href=\"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":1789,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"googlesitekit_rrm_CAowu7GVCw:productID":"","_crdt_document":"","activitypub_content_warning":"","activitypub_content_visibility":"","activitypub_max_image_attachments":3,"activitypub_interaction_policy_quote":"anyone","activitypub_status":"","footnotes":""},"categories":[12],"tags":[50,3023,4383,2223,4274,722,4035,4380,1399,3761,2228,4381,212,3762,3755,726,4382],"class_list":["post-1788","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-media","tag-articles","tag-assumption","tag-blayne-haggart","tag-branches-of-philosophy","tag-chatgpt","tag-creative-commons","tag-credibility","tag-epistemology","tag-france","tag-knowledge","tag-metaphysics","tag-ontology","tag-philosophy-of-mind","tag-research","tag-scientific-method","tag-shutterstock","tag-truth"],"featured_image_src":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-1024x683.jpg","blog_images":{"medium":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-300x200.jpg","large":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-1024x683.jpg"},"ams_acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.5 - 
https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth - Mpelembe Network<\/title>\n<meta name=\"description\" content=\"ChatGPT is a sophisticated AI program that generates text from vast databases. But it doesn\u2019t understand the information it produces, which also can\u2019t be verified through scientific means.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth - Mpelembe Network\" \/>\n<meta property=\"og:description\" content=\"ChatGPT is a sophisticated AI program that generates text from vast databases. 
But it doesn\u2019t understand the information it produces, which also can\u2019t be verified through scientific means.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/\" \/>\n<meta property=\"og:site_name\" content=\"Mpelembe Network\" \/>\n<meta property=\"article:published_time\" content=\"2023-01-31T10:45:51+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-02T14:23:43+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1707\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\"},\"headline\":\"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth\",\"datePublished\":\"2023-01-31T10:45:51+00:00\",\"dateModified\":\"2023-03-02T14:23:43+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/\"},\"wordCount\":1087,\"image\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/01\\\/file-20230125-18-6yb4mn-scaled.jpg\",\"keywords\":[\"Articles\",\"Assumption\",\"Blayne Haggart\",\"Branches of philosophy\",\"ChatGPT\",\"Creative Commons\",\"Credibility\",\"Epistemology\",\"France\",\"Knowledge\",\"Metaphysics\",\"Ontology\",\"Philosophy of mind\",\"Research\",\"Scientific 
method\",\"SHUTTERSTOCK\",\"Truth\"],\"articleSection\":[\"Media\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/\",\"name\":\"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth - Mpelembe Network\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/01\\\/file-20230125-18-6yb4mn-scaled.jpg\",\"datePublished\":\"2023-01-31T10:45:51+00:00\",\"dateModified\":\"2023-03-02T14:23:43+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\"},\"description\":\"ChatGPT is a sophisticated AI program that generates text from vast databases. 
But it doesn\u2019t understand the information it produces, which also can\u2019t be verified through scientific means.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#primaryimage\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/01\\\/file-20230125-18-6yb4mn-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/01\\\/file-20230125-18-6yb4mn-scaled.jpg\",\"width\":2560,\"height\":1707},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/mpelembe.net\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#website\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/\",\"name\":\"Mpelembe Network\",\"description\":\"Collaboration 
Platform\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/mpelembe.net\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"sameAs\":[\"https:\\\/\\\/mpelembe.net\"],\"url\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/author\\\/admin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth - Mpelembe Network","description":"ChatGPT is a sophisticated AI program that generates text from vast databases. 
But it doesn\u2019t understand the information it produces, which also can\u2019t be verified through scientific means.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/","og_locale":"en_US","og_type":"article","og_title":"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth - Mpelembe Network","og_description":"ChatGPT is a sophisticated AI program that generates text from vast databases. But it doesn\u2019t understand the information it produces, which also can\u2019t be verified through scientific means.","og_url":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/","og_site_name":"Mpelembe Network","article_published_time":"2023-01-31T10:45:51+00:00","article_modified_time":"2023-03-02T14:23:43+00:00","og_image":[{"width":2560,"height":1707,"url":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-scaled.jpg","type":"image\/jpeg"}],"author":"admin","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#article","isPartOf":{"@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/"},"author":{"name":"admin","@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608"},"headline":"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth","datePublished":"2023-01-31T10:45:51+00:00","dateModified":"2023-03-02T14:23:43+00:00","mainEntityOfPage":{"@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/"},"wordCount":1087,"image":{"@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#primaryimage"},"thumbnailUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-scaled.jpg","keywords":["Articles","Assumption","Blayne Haggart","Branches of philosophy","ChatGPT","Creative Commons","Credibility","Epistemology","France","Knowledge","Metaphysics","Ontology","Philosophy of mind","Research","Scientific method","SHUTTERSTOCK","Truth"],"articleSection":["Media"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/","url":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/","name":"Unlike with academics and reporters, you can\u2019t check when ChatGPT\u2019s telling the\u00a0truth - Mpelembe 
Network","isPartOf":{"@id":"https:\/\/mpelembe.net\/#website"},"primaryImageOfPage":{"@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#primaryimage"},"image":{"@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#primaryimage"},"thumbnailUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-scaled.jpg","datePublished":"2023-01-31T10:45:51+00:00","dateModified":"2023-03-02T14:23:43+00:00","author":{"@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608"},"description":"ChatGPT is a sophisticated AI program that generates text from vast databases. But it doesn\u2019t understand the information it produces, which also can\u2019t be verified through scientific means.","breadcrumb":{"@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#primaryimage","url":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-scaled.jpg","contentUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/01\/file-20230125-18-6yb4mn-scaled.jpg","width":2560,"height":1707},{"@type":"BreadcrumbList","@id":"https:\/\/mpelembe.net\/index.php\/unlike-with-academics-and-reporters-you-cant-check-when-chatgpts-telling-the-truth\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/mpelembe.net\/"},{"@type":"ListItem","position":2,"name":"Unlike with academics and reporters, you 
can\u2019t check when ChatGPT\u2019s telling the\u00a0truth"}]},{"@type":"WebSite","@id":"https:\/\/mpelembe.net\/#website","url":"https:\/\/mpelembe.net\/","name":"Mpelembe Network","description":"Collaboration Platform","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/mpelembe.net\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/mpelembe.net"],"url":"https:\/\/mpelembe.net\/index.php\/author\/admin\/"}]}},"_links":{"self":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/1788","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/comments?post=1788"}],"version-history":[{"count":1,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/1788\/revisions"}],"predecessor-version":[{"id":1790,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/1788\/revisions\/1790"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-j
son\/wp\/v2\/media\/1789"}],"wp:attachment":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/media?parent=1788"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/categories?post=1788"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/tags?post=1788"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}