{"id":2248,"date":"2023-04-03T09:02:27","date_gmt":"2023-04-03T09:02:27","guid":{"rendered":"https:\/\/mpelembe.net\/?p=2248"},"modified":"2023-04-18T14:28:56","modified_gmt":"2023-04-18T14:28:56","slug":"ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why","status":"publish","type":"post","link":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/","title":{"rendered":"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why"},"content":{"rendered":"<p><span><a href=\"https:\/\/theconversation.com\/profiles\/david-beer-149528\">David Beer<\/a>, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-york-1344\">University of York<\/a><\/em><\/span><\/p>\n<p>In 1956, during a year-long trip to London and in his early 20s, the mathematician and theoretical biologist Jack D. Cowan visited Wilfred Taylor and his strange new \u201c<a href=\"https:\/\/users.sussex.ac.uk\/%7Ephilh\/pubs\/CowanInterview.pdf\">learning machine<\/a>\u201d. On his arrival he was baffled by the \u201chuge bank of apparatus\u201d that confronted him. Cowan could only stand by and watch \u201cthe machine doing its thing\u201d. The thing it appeared to be doing was performing an \u201cassociative memory scheme\u201d \u2013 it seemed to be able to learn how to find connections and retrieve data.<\/p>\n<p><!--more--><\/p>\n<p>It may have looked like clunky blocks of circuitry, soldered together by hand in a mass of wires and boxes, but what Cowan was witnessing was an early analogue form of a neural network \u2013 a precursor to the most advanced artificial intelligence of today, including the much discussed <a href=\"https:\/\/theconversation.com\/uk\/topics\/chatgpt-130961\">ChatGPT<\/a> with its ability to generate written content in response to almost any command. 
ChatGPT\u2019s underlying technology is a neural network. <\/p>\n<p>As Cowan and Taylor stood and watched the machine work, they really had no idea exactly how it was managing to perform this task. The answer to Taylor\u2019s mystery machine brain can be found somewhere in its \u201canalog neurons\u201d, in the associations made by its machine memory and, most importantly, in the fact that its automated functioning couldn\u2019t really be fully explained. It would take decades for these systems to find their purpose and for that power to be unlocked.<\/p>\n<figure class=\"align-right zoomable\">\n            <a href=\"https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=1000&#038;fit=clip\"><img decoding=\"async\" alt=\"Black and white image of a man sitting down.\" data-src=\"https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=237&#038;fit=clip\" data-srcset=\"https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=600&#038;h=456&#038;fit=crop&#038;dpr=1 600w, https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=600&#038;h=456&#038;fit=crop&#038;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=600&#038;h=456&#038;fit=crop&#038;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;h=573&#038;fit=crop&#038;dpr=1 754w, https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=754&#038;h=573&#038;fit=crop&#038;dpr=2 1508w, 
https:\/\/images.theconversation.com\/files\/516907\/original\/file-20230322-14-eoyezh.jpeg?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=754&#038;h=573&#038;fit=crop&#038;dpr=3 2262w\" data-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><\/a><figcaption>\n              <span class=\"caption\">Jack Cowan, who played a key part in the development of neural networks from the 1950s onwards.<\/span><br \/>\n              <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/photoarchive.lib.uchicago.edu\/db.xqy?one=apf1-12037.xml\">University of Chicago Photographic Archive, Hanna Holborn Gray Special Collections Research Center.<\/a><\/span><br \/>\n            <\/figcaption><\/figure>\n<p>The term neural network incorporates a wide range of systems, yet centrally, <a href=\"https:\/\/www.ibm.com\/topics\/neural-networks\">according to IBM<\/a>, these \u201cneural networks \u2013 also known as artificial neural networks (ANNs) or simulated neural networks (SNNs) \u2013 are a subset of machine learning and are at the heart of deep learning algorithms\u201d. Crucially, the term itself and their form and \u201cstructure are inspired by the human brain, mimicking the way that biological neurons signal to one another\u201d.<\/p>\n<p>There may have been some residual doubt about their value in their initial stages, but as the years have passed, AI fashions have swung firmly towards neural networks. They are now often understood to be the future of AI. They have big implications for us and for what it means to be human. 
We have heard <a href=\"https:\/\/techcrunch.com\/2023\/03\/28\/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months\/\">echoes of these concerns recently<\/a> with calls to pause new AI developments for a six-month period to ensure confidence in their implications. <\/p>\n<hr>\n<figure class=\"align-right \">\n            <img decoding=\"async\" alt=\"\" data-src=\"https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=237&#038;fit=clip\" data-srcset=\"https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=600&#038;h=600&#038;fit=crop&#038;dpr=1 600w, https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=600&#038;h=600&#038;fit=crop&#038;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=600&#038;h=600&#038;fit=crop&#038;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;h=754&#038;fit=crop&#038;dpr=1 754w, https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=754&#038;h=754&#038;fit=crop&#038;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/288776\/original\/file-20190820-170910-8bv1s7.png?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=754&#038;h=754&#038;fit=crop&#038;dpr=3 2262w\" data-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><figcaption>\n              <span class=\"caption\"><\/span>\n<\/figcaption><\/figure>\n<p><strong><em>This article is part of Conversation Insights<\/em><\/strong><br \/>\n<br \/><em>The Insights team generates <a href=\"https:\/\/theconversation.com\/uk\/topics\/insights-series-71218\">long-form journalism<\/a> derived from interdisciplinary research. The team is working with academics from different backgrounds who have been engaged in projects aimed at tackling societal and scientific challenges.<\/em><\/p>\n<hr>\n<p>It would certainly be a mistake to dismiss neural networks as being solely about glossy, eye-catching new gadgets. They are already well established in our lives. Some are powerful in their practicality. As far back as 1989, a team led by Yann LeCun at AT&#038;T Bell Laboratories used back-propagation techniques to train a system to <a href=\"https:\/\/www.ibm.com\/topics\/neural-networks\">recognise handwritten postal codes<\/a>. The recent <a href=\"https:\/\/blogs.microsoft.com\/blog\/2023\/02\/07\/reinventing-search-with-a-new-ai-powered-microsoft-bing-and-edge-your-copilot-for-the-web\/\">announcement by Microsoft<\/a> that Bing searches will be powered by AI, making it your \u201ccopilot for the web\u201d, illustrates how the things we discover and how we understand them will increasingly be a product of this type of automation.<\/p>\n<p>Drawing on vast data to find patterns, AI can similarly be trained to do things like image recognition at speed, resulting in its incorporation into <a href=\"https:\/\/patents.google.com\/patent\/US7295687B2\/en\">facial recognition<\/a> systems, for instance. This ability to identify patterns has led to many other applications, such as <a href=\"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-020-00333-6\">predicting stock markets<\/a>.<\/p>\n<p>Neural networks are changing how we interpret and communicate too. 
Developed by the interestingly titled <a href=\"https:\/\/g.co\/brain\">Google Brain Team<\/a>, <a href=\"https:\/\/ai.googleblog.com\/2016\/09\/a-neural-network-for-machine.html\">Google Translate<\/a> is another prominent application of a neural network. <\/p>\n<p>You wouldn\u2019t want to play chess or shogi with one either. Their grasp of rules and their recall of strategies and all recorded moves mean that they are exceptionally good at games (although ChatGPT seems to <a href=\"https:\/\/theconversation.com\/chatgpt-struggles-with-wordle-puzzles-which-says-a-lot-about-how-it-works-201906\">struggle with Wordle<\/a>). The systems that are troubling human Go players (Go is a notoriously tricky strategy board game) and chess grandmasters are <a href=\"https:\/\/www.deepmind.com\/blog\/alphazero-shedding-new-light-on-chess-shogi-and-go\">made from neural networks<\/a>.<\/p>\n<p>But their reach goes far beyond these instances and continues to expand. A search of patents restricted only to mentions of the exact phrase \u201cneural networks\u201d produces 135,828 results. With this rapid and ongoing expansion, the chances of us being able to fully explain the influence of AI may become ever thinner. These are the questions I have been examining in my research <a href=\"https:\/\/bristoluniversitypress.co.uk\/the-tensions-of-algorithmic-thinking\">and my new book on algorithmic thinking<\/a>.<\/p>\n<h2>Mysterious layers of \u2018unknowability\u2019<\/h2>\n<p>Looking back at the history of neural networks tells us something important about the automated decisions that define our present, and about those that will have a possibly more profound impact in the future. Their presence also tells us that we are likely to understand the decisions and impacts of AI even less over time. 
These systems are not simply black boxes; they are not just hidden bits of a system that can\u2019t be seen or understood.<\/p>\n<p>It is something different, something rooted in the aims and design of these systems themselves. There is a long-held pursuit of the unexplainable. The more opaque, the more authentic and advanced the system is thought to be. It is not just about the systems becoming more complex or the control of intellectual property limiting access (although these are part of it). It is instead to say that the ethos driving them has a particular and embedded interest in \u201cunknowability\u201d. The mystery is even coded into the very form and discourse of the neural network. They come with deeply piled layers \u2013 hence the phrase deep learning \u2013 and within those depths are the even more mysterious-sounding \u201chidden layers\u201d. The mysteries of these systems are deep below the surface.<\/p>\n<p>There is a good chance that the greater the impact artificial intelligence comes to have in our lives, the less we will understand how or why. Today there is a strong push for AI that is explainable. We want to know how it works and how it arrives at decisions and outcomes. 
The EU is so concerned by the potentially \u201cunacceptable risks\u201d and even \u201cdangerous\u201d applications that it is currently advancing <a href=\"https:\/\/artificialintelligenceact.eu\">a new AI Act<\/a> intended to set a \u201cglobal standard\u201d for \u201cthe development of secure, trustworthy and ethical artificial intelligence\u201d.<\/p>\n<p>Those new laws will be based on a need for explainability, <a href=\"https:\/\/eur-lex.europa.eu\/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02\/DOC_1&#038;format=PDF\">demanding that<\/a> \u201cfor high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI\u201d. This is not just about things like self-driving cars (although systems that ensure safety fall into the EU\u2019s category of high-risk AI); there is also a worry that systems will emerge in the future that will have implications for human rights. <\/p>\n<p>This is part of wider calls for transparency in AI so that its activities can be checked, audited and assessed. Another example is the Royal Society\u2019s <a href=\"https:\/\/royalsociety.org\/-\/media\/policy\/projects\/explainable-ai\/AI-and-interpretability-policy-briefing.pdf\">policy briefing on explainable AI<\/a>, in which they point out that \u201cpolicy debates across the world increasingly see calls for some form of AI explainability, as part of efforts to embed ethical principles into the design and deployment of AI-enabled systems\u201d.<\/p>\n<p>But the story of neural networks tells us that we are likely to get further away from that objective in the future, rather than closer to it.<\/p>\n<h2>Inspired by the human brain<\/h2>\n<p>These neural networks may be complex systems, yet they have some core principles. 
Inspired by the human brain, they seek to copy or simulate forms of biological and human thinking. In terms of structure and design they are, as <a href=\"https:\/\/www.ibm.com\/topics\/neural-networks\">IBM also explains<\/a>, made up of \u201cnode layers, containing an input layer, one or more hidden layers, and an output layer\u201d. Within this, \u201ceach node, or artificial neuron, connects to another\u201d. Because they require inputs and information to create outputs, they \u201crely on training data to learn and improve their accuracy over time\u201d. These technical details matter, but so too does the wish to model these systems on the complexities of the human brain.<\/p>\n<p>Grasping the ambition behind these systems is vital in understanding what these technical details have come to mean in practice. In a <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">1993 interview<\/a>, the neural network scientist Teuvo Kohonen concluded that a \u201cself-organising\u201d system \u201cis my dream\u201d, operating \u201csomething like what our nervous system is doing instinctively\u201d. As an example, Kohonen pictured how a \u201cself-organising\u201d system, a system that monitored and managed itself, \u201ccould be used as a monitoring panel for any machine \u2026 in every airplane, jet plane, or every nuclear power station, or every car\u201d. This, he thought, would mean that in the future \u201cyou could see immediately what condition the system is in\u201d. 
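<\/p>\n<p>The layered structure described above (an input layer, one or more hidden layers, and an output layer of connected nodes) can be sketched in a few lines of code. This is purely an illustrative sketch with arbitrary, made-up weights rather than values from any real system: each artificial neuron takes a weighted sum of the values in the layer beneath it and passes the result through an activation function.<\/p>\n

```python
import math

def forward(inputs, weight_layers):
    """Pass values through successive layers of artificial neurons.

    Each node computes a weighted sum of the previous layer's values,
    then applies a sigmoid activation.
    """
    values = inputs
    for weights in weight_layers:
        values = [
            1 / (1 + math.exp(-sum(w * v for w, v in zip(node_weights, values))))
            for node_weights in weights
        ]
    return values

# Arbitrary illustrative weights: 2 inputs -> 3 hidden nodes -> 1 output node.
hidden_layer = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
output_layer = [[0.7, -0.6, 0.2]]

result = forward([1.0, 0.5], [hidden_layer, output_layer])
print(result)  # a single value between 0 and 1
```

\n<p>Each inner list of weights stands in for the connections into one node; stacking more lists of layers deepens the network. 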
<\/p>\n<figure class=\"align-center \">\n            <img decoding=\"async\" alt=\"A group of male scientists around an old computer.\" data-src=\"https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" data-srcset=\"https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=600&#038;h=446&#038;fit=crop&#038;dpr=1 600w, https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=600&#038;h=446&#038;fit=crop&#038;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=600&#038;h=446&#038;fit=crop&#038;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;h=561&#038;fit=crop&#038;dpr=1 754w, https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=754&#038;h=561&#038;fit=crop&#038;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/516910\/original\/file-20230322-14-7n0y2x.jpg?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=754&#038;h=561&#038;fit=crop&#038;dpr=3 2262w\" data-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><figcaption>\n              <span class=\"caption\">Early computing often involved a large apparatus of assembled parts.<\/span><br \/>\n              <span class=\"attribution\"><span class=\"source\">Aalto University Archives<\/span><\/span><br \/>\n            <\/figcaption><\/figure>\n<p>The overarching 
objective was to have a system capable of adapting to its surroundings. It would be instant and autonomous, operating in the style of the nervous system. That was the dream: to have systems that could handle themselves without the need for much human intervention. The complexities and unknowns of the brain, the nervous system and the real world would soon come to inform the development and design of neural networks.<\/p>\n<h2>\u2018Something fishy about it\u2019<\/h2>\n<p>But jumping back to 1956 and that strange learning machine, it was the hands-on approach that Taylor had taken when building it that immediately caught Cowan\u2019s attention. He had clearly sweated over the assembly of the bits and pieces. Taylor, <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">Cowan observed<\/a> during an interview on his own part in the story of these systems, \u201cdidn\u2019t do it by theory, and he didn\u2019t do it on a computer\u201d. Instead, with tools in hand, he \u201cactually built the hardware\u201d. It was a material thing, a combination of parts, perhaps even a contraption. And it was \u201call done with analogue circuitry\u201d, taking Taylor, Cowan notes, \u201cseveral years to build it and to play with it\u201d. A case of trial and error.<\/p>\n<p>Understandably, Cowan wanted to get to grips with what he was seeing. He tried to get Taylor to explain this learning machine to him. The clarifications didn\u2019t come. Cowan couldn\u2019t get Taylor to describe to him how the thing worked. The analogue neurons remained a mystery. The more surprising problem, Cowan thought, was that Taylor \u201cdidn\u2019t really understand himself what was going on\u201d. 
This wasn\u2019t just a momentary breakdown in communication between two scientists with different specialisms; it was more than that.<\/p>\n<p>In an <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">interview from the mid-1990s<\/a>, thinking back to Taylor\u2019s machine, Cowan revealed that \u201cto this day in published papers you can\u2019t quite understand how it works\u201d. This conclusion is suggestive of how the unknown is deeply embedded in neural networks. The unexplainability of these neural systems has been present even from their foundational, developmental stages, dating back nearly seven decades. <\/p>\n<p>This mystery remains today and is to be found within advancing forms of AI. The unfathomability of the functioning of the associations made by Taylor\u2019s machine led Cowan to wonder if there was \u201csomething fishy about it\u201d.<\/p>\n<h2>Long and tangled roots<\/h2>\n<p>Cowan referred back to his brief visit with Taylor when asked about the reception of his own work some years later. Into the 1960s people were, Cowan reflected, \u201ca little slow to see the point of an analogue neural network\u201d. This was despite, Cowan recalls, Taylor\u2019s 1950s work on \u201cassociative memory\u201d being based on \u201canalog neurons\u201d. The Nobel Prize-winning neural systems expert <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">Leon N. Cooper concluded<\/a> that developments around the application of the brain model in the 1960s were regarded \u201cas among the deep mysteries\u201d. Because of this uncertainty there remained a scepticism about what a neural network might achieve. But things slowly began to change.<\/p>\n<p>Some 30 years ago the neuroscientist Walter J. 
Freeman, who was surprised by the \u201c<a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">remarkable<\/a>\u201d range of applications that had been found for neural networks, was already commenting on the fact that he didn\u2019t see them as \u201ca fundamentally new kind of machine\u201d. They were a slow burn, with the technology coming first and then subsequent applications being found for it. This took time. Indeed, to find the roots of neural network technology we might head back even further than Cowan\u2019s visit to Taylor\u2019s mysterious machine. <\/p>\n<p>The neural net scientist James Anderson and the science journalist Edward Rosenfeld <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">have noted<\/a> that the background to neural networks goes back to the 1940s and some early attempts to, as they describe, \u201cunderstand the human nervous systems and to build artificial systems that act the way we do, at least a little bit\u201d. And so, in the 1940s, the mysteries of the human nervous system also became the mysteries of computational thinking and artificial intelligence.<\/p>\n<p>Summarising this long story, the computer science writer <a href=\"https:\/\/news.mit.edu\/2017\/explained-neural-networks-deep-learning-0414\">Larry Hardesty has pointed out<\/a> that neural networks, the basis of deep learning, \u201chave been going in and out of fashion for more than 70 years\u201d. 
More specifically, he adds, these \u201cneural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what\u2019s sometimes called the first cognitive science department\u201d.<\/p>\n<figure class=\"align-right \">\n            <img decoding=\"async\" alt=\"Black and white image of two men\" data-src=\"https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=237&#038;fit=clip\" data-srcset=\"https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=600&#038;h=572&#038;fit=crop&#038;dpr=1 600w, https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=600&#038;h=572&#038;fit=crop&#038;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=600&#038;h=572&#038;fit=crop&#038;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;h=718&#038;fit=crop&#038;dpr=1 754w, https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=754&#038;h=718&#038;fit=crop&#038;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/516916\/original\/file-20230322-28-33idd6.png?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=754&#038;h=718&#038;fit=crop&#038;dpr=3 2262w\" data-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><figcaption>\n              <span 
class=\"caption\">The inventors of the neural network, Walter Pitts and Warren McCulloch, pictured here in 1949.<\/span><br \/>\n              <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.semanticscholar.org\/paper\/On-the-legacy-of-W.S.-McCulloch-Moreno-D%C3%ADaz-Moreno-D%C3%ADaz\/8056242a82ecc5e0064d4ff187fb07c5853fe8a6\">Semantic Scholar<\/a><\/span><br \/>\n            <\/figcaption><\/figure>\n<p>Elsewhere, <a href=\"https:\/\/www.historyofinformation.com\/detail.php?entryid=782\">1943<\/a> is sometimes given as the first year for the technology. Either way, for roughly 70 years accounts suggest that neural networks have moved in and out of vogue, often neglected but then sometimes taking hold and moving into more mainstream applications and debates. The uncertainty persisted. Those early developers frequently describe the importance of their research as being overlooked until it found its purpose, often years and sometimes decades later.<\/p>\n<p>Moving from the 1960s into the late 1970s we can find further stories of the unknown properties of these systems. Even then, after three decades, the neural network was still to find a sense of purpose. David Rumelhart, who had a background in psychology and was a co-author of a set of books published in 1986 that would later drive attention back again towards neural networks, found himself collaborating on the development of neural networks <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">with his colleague Jay McClelland<\/a>. <\/p>\n<p>As well as being colleagues, they had also recently encountered each other at a conference in Minnesota, where Rumelhart\u2019s talk on \u201cstory understanding\u201d had provoked some discussion among the delegates.<\/p>\n<p>Following that conference, McClelland returned with a thought about how to develop a neural network that might combine models to be more interactive. 
What matters here is <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">Rumelhart\u2019s recollection<\/a> of the \u201chours and hours and hours of tinkering on the computer\u201d.<\/p>\n<blockquote>\n<p>We sat down and did all this in the computer and built these computer models, and we just didn\u2019t understand them. We didn\u2019t understand why they worked or why they didn\u2019t work or what was critical about them.<\/p>\n<\/blockquote>\n<p>Like Taylor, Rumelhart found himself tinkering with the system. They too created a functioning neural network and, crucially, they also weren\u2019t sure how or why it worked in the way that it did, seemingly learning from data and finding associations.<\/p>\n<h2>Mimicking the brain &#8211; layer after layer<\/h2>\n<p>You may already have noticed that when discussing the origins of neural networks the image of the brain and the complexity this evokes are never far away. The human brain acted as a sort of template for these systems. In the early stages, in particular, the brain \u2013 still one of the great unknowns \u2013 became a model for how the neural network might function. 
<\/p>\n<figure class=\"align-center \">\n            <img decoding=\"async\" alt=\"Design concept of layers in the brain.\" data-src=\"https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;fit=clip\" data-srcset=\"https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=600&#038;h=600&#038;fit=crop&#038;dpr=1 600w, https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=600&#038;h=600&#038;fit=crop&#038;dpr=2 1200w, https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=600&#038;h=600&#038;fit=crop&#038;dpr=3 1800w, https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=45&#038;auto=format&#038;w=754&#038;h=754&#038;fit=crop&#038;dpr=1 754w, https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=30&#038;auto=format&#038;w=754&#038;h=754&#038;fit=crop&#038;dpr=2 1508w, https:\/\/images.theconversation.com\/files\/516924\/original\/file-20230322-26-43l12y.jpg?ixlib=rb-1.1.0&#038;q=15&#038;auto=format&#038;w=754&#038;h=754&#038;fit=crop&#038;dpr=3 2262w\" data-sizes=\"(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" class=\"lazyload\"><figcaption>\n              <span class=\"caption\">The model of the brain became a model for the layering within artificial neural networks.<\/span><br \/>\n              <span class=\"attribution\"><a class=\"source\" 
href=\"https:\/\/www.shutterstock.com\/image-vector\/brain-paper-cut-style-layers-art-1303430377\">Shutterstock\/CYB3RUSS<\/a><\/span><br \/>\n            <\/figcaption><\/figure>\n<p>So these experimental new systems were modelled on something whose functioning was itself largely unknown. The neurocomputing engineer Carver Mead <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">has spoken revealingly<\/a> of the conception of a \u201ccognitive iceberg\u201d that he had found particularly appealing. We are aware of only the tip of the iceberg of consciousness, the part that is visible. The scale and form of the rest remain unknown below the surface.<\/p>\n<p>In 1998, <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">James Anderson<\/a>, who had been working for some time on neural networks, noted that when it came to research on the brain \u201cour major discovery seems to be an awareness that we really don\u2019t know what is going on\u201d.<\/p>\n<p>In a detailed account in the <a href=\"https:\/\/www.ft.com\/content\/bcd81a88-cadb-11e8-b276-b9069bde0956\">Financial Times in 2018<\/a>, the technology journalist Richard Waters noted how neural networks \u201care modelled on a theory about how the human brain operates, passing data through layers of artificial neurons until an identifiable pattern emerges\u201d. This creates a knock-on problem, Waters proposed, as \u201cunlike the logic circuits employed in a traditional software program, there is no way of tracking this process to identify exactly why a computer comes up with a particular answer\u201d. Waters\u2019 conclusion is that these outcomes cannot be unpicked. The application of this type of model of the brain, taking the data through many layers, means that the answer cannot readily be retraced. 
The multiple layering is a good part of the reason for this.<\/p>\n<p><a href=\"https:\/\/news.mit.edu\/2017\/explained-neural-networks-deep-learning-0414\">Hardesty<\/a> also observed that these systems are \u201cmodelled loosely on the human brain\u201d. This has brought an eagerness to build in ever more processing complexity in order to try to match up with the brain. The result of this aim is a neural net that \u201cconsists of thousands or even millions of simple processing nodes that are densely interconnected\u201d. Data moves through these nodes in only one direction. Hardesty observed that an \u201cindividual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data\u201d.<\/p>\n<p>Models of the human brain were a part of how these neural networks were conceived and designed from the outset. This is particularly interesting when we consider that the brain was itself a mystery at the time (and in many ways still is). <\/p>\n<h2>\u2018Adaptation is the whole game\u2019<\/h2>\n<p>Scientists like Mead and Kohonen wanted to create a system that could genuinely adapt to the world in which it found itself. It would respond to its conditions. Mead was clear that the value in neural networks was that they could facilitate this type of adaptation. At the time, and reflecting on this ambition, <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">Mead added<\/a> that producing adaptation \u201cis the whole game\u201d. This adaptation is needed, he thought, \u201cbecause of the nature of the real world\u201d, which he concluded is \u201ctoo variable to do anything absolute\u201d.<\/p>\n<p>This problem needed to be reckoned with, especially as, he thought, this was something \u201cthe nervous system figured out a long time ago\u201d. 
Not only were these innovators working with an image of the brain and its unknowns, they were combining this with a vision of the \u201creal world\u201d and the uncertainties, unknowns and variability that it brings. The systems, Mead thought, needed to be able to respond and adapt to circumstances <em>without<\/em> instruction.<\/p>\n<p>Around the same time in the 1990s, Stephen Grossberg \u2013 an expert in cognitive systems working across maths, psychology and biomedical engineering \u2013 <a href=\"https:\/\/mitpress.mit.edu\/9780262511117\/talking-nets\/\">also argued that<\/a> adaptation was going to be the important step in the longer term. As he worked away on neural network modelling, Grossberg thought to himself that it was all \u201cabout how biological measurement and control systems are designed to adapt quickly and stably in real time to a rapidly fluctuating world\u201d. As we saw earlier with Kohonen\u2019s \u201cdream\u201d of a \u201cself-organising\u201d system, a notion of the \u201creal world\u201d becomes the context in which response and adaptation are coded into these systems. How that real world is understood and imagined undoubtedly shapes how these systems are designed to adapt.<\/p>\n<h2>Hidden layers<\/h2>\n<p>As the layers multiplied, deep learning plumbed new depths. The neural network is trained using training data that, <a href=\"https:\/\/news.mit.edu\/2017\/explained-neural-networks-deep-learning-0414\">Hardesty explained<\/a>, \u201cis fed to the bottom layer \u2013 the input layer \u2013 and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer\u201d. The more layers, the greater the transformation and the greater the distance from input to output. 
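Hardesty\u2019s description of data being \u201cmultiplied and added together\u201d as it crosses layers can be sketched in a few lines of Python. This is a toy illustration only \u2013 the layer sizes, the random weights and the simple thresholding are arbitrary assumptions, not any particular system:

```python
# Toy feedforward sketch: data enters an input layer and is repeatedly
# multiplied by weights and summed as it passes through successive layers.
import random

random.seed(0)

def make_layer(n_in, n_out):
    # One layer = a weight for every incoming/outgoing connection
    # (the values here are arbitrary, standing in for learned weights).
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    # Each node multiplies its incoming values by its weights, adds them up,
    # and passes the result onward -- data flows in one direction only.
    return [max(0.0, sum(w * x for w, x in zip(weights, inputs)))
            for weights in layer]

# Four inputs pass through two intermediate layers down to two outputs.
layers = [make_layer(4, 5), make_layer(5, 5), make_layer(5, 2)]
signal = [0.2, 0.7, 0.1, 0.9]
for layer in layers:
    signal = forward(layer, signal)

print(signal)  # the "radically transformed" output
```

Even in this toy, the outputs are nothing but cascades of weighted sums: inspecting the individual weights does not reveal why a particular answer emerged, and real networks contain millions of them.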
The development of Graphics Processing Units (GPUs), in gaming for instance, Hardesty added, \u201cenabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, or even 50-layer networks of today\u201d. <\/p>\n<p>Neural networks are getting deeper. Indeed, it\u2019s this adding of layers, according to Hardesty, that is \u201cwhat the \u2018deep\u2019 in \u2018deep learning\u2019 refers to\u201d. This matters, he proposes, because \u201ccurrently, deep learning is responsible for the best-performing systems in almost every area of artificial intelligence research\u201d.<\/p>\n<p>But the mystery gets deeper still. As the layers of neural networks have piled higher, their complexity has grown. This has also led to the growth of what are referred to as \u201chidden layers\u201d within these depths. The discussion of the optimum number of hidden layers in a neural network is ongoing. The media theorist <a href=\"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/0263276420966386\">Beatrice Fazi has written<\/a> that \u201cbecause of how a deep neural network operates, relying on hidden neural layers sandwiched between the first layer of neurons (the input layer) and the last layer (the output layer), deep-learning techniques are often opaque or illegible even to the programmers that originally set them up\u201d. <\/p>\n<p>As the layers increase (including those hidden layers) they become even less explainable \u2013 even, as it turns out, to those creating them. 
Making a similar point, the prominent interdisciplinary new media thinker Katherine Hayles <a href=\"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/0263276419829539\">also noted<\/a> that there are limits to \u201chow much we can know about the system, a result relevant to the \u2018hidden layer\u2019 in neural net and deep learning algorithms\u201d.<\/p>\n<h2>Pursuing the unexplainable<\/h2>\n<p>Taken together, these long developments are part of what the sociologist of technology <a href=\"https:\/\/global.oup.com\/academic\/product\/ifthen-9780190493035?cc=gb&#038;lang=en&#038;\">Taina Bucher<\/a> has called the \u201cproblematic of the unknown\u201d. Expanding his influential research on scientific knowledge into the field of AI, Harry Collins <a href=\"https:\/\/www.wiley.com\/en-sg\/Artifictional+Intelligence:+Against+Humanity's+Surrender+to+Computers-p-9781509504152\">has pointed out that<\/a> with neural nets a human may produce the program, initially at least, but \u201conce written the program lives its own life, as it were; without huge effort, exactly how the program is working can remain mysterious\u201d. This has echoes of those long-held dreams of a self-organising system. <\/p>\n<p>I\u2019d add to this that the unknown, and maybe even the unknowable, have been pursued as a fundamental part of these systems from their earliest stages. There is a good chance that the greater the impact artificial intelligence comes to have in our lives, the less we will understand how or why.<\/p>\n<p>But that doesn\u2019t sit well with many today. We want to know how AI works and how it arrives at the decisions and outcomes that affect us. As developments in AI continue to shape our knowledge and understanding of the world \u2013 what we discover, how we are treated, how we learn, consume and interact \u2013 this impulse to understand will grow. 
When it comes to explainable and transparent AI, the story of neural networks tells us that we are likely to get further away from that objective in the future, rather than closer to it.<\/p>\n<hr>\n<p><span><a href=\"https:\/\/theconversation.com\/profiles\/david-beer-149528\">David Beer<\/a>, Professor of Sociology, <em><a href=\"https:\/\/theconversation.com\/institutions\/university-of-york-1344\">University of York<\/a><\/em><\/span><\/p>\n<p>This article is republished from <a href=\"https:\/\/theconversation.com\">The Conversation<\/a> under a Creative Commons license. 
Read the <a href=\"https:\/\/theconversation.com\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why-199456\">original article<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>David Beer, University of York In 1956, during a year-long trip to London and in his early 20s, the mathematician and theoretical biologist Jack<a class=\"moretag\" href=\"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/\">Read More&#8230;<\/a><\/p>\n","protected":false},"author":1,"featured_media":2249,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"googlesitekit_rrm_CAowu7GVCw:productID":"","_crdt_document":"","activitypub_content_warning":"","activitypub_content_visibility":"","activitypub_max_image_attachments":3,"activitypub_interaction_policy_quote":"anyone","activitypub_status":"","footnotes":""},"categories":[38],"tags":[51,50,52,5369,5389,5384,4571,5393,203,53,722,54,5372,5386,4300,5391,118,5376,1235,5377,5374,5373,5380,5375,5378,773,2335,5381,1250,5370,5388,726,5371,5385,5392,5382,5379,5383,5387,5390],"class_list":["post-2248","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-innovation","tag-academic-disciplines","tag-articles","tag-artificial-intelligence","tag-artificial-neural-networks","tag-beatrice-fazi","tag-bell-laboratories","tag-bing","tag-carver-mead","tag-cognitive-science","tag-computational-neuroscience","tag-creative-commons","tag-cybernetics","tag-david-beer","tag-david-rumelhart","tag-deep-learning","tag-edward-rosenfeld","tag-emerging-technologies","tag-harry-collins","tag-ibm","tag-jack-d-cowan","tag-james-anderson","tag-jay-mcclelland","tag-katherine-hayles","tag-larry-hardesty","tag-leon-n-cooper","tag-london","tag-machine-learning","tag-mead","tag-microsoft","tag-neural-network","tag-richard-waters","tag-shutterstock","tag
-stephen-grossberg","tag-taina-bucher","tag-teuvo-kohonen","tag-walter-j-freeman","tag-walter-pitts","tag-warren-mcculloch","tag-wilfred-taylor","tag-yann-lecun"],"featured_image_src":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93-1024x597.jpg","blog_images":{"medium":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93-300x175.jpg","large":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93-1024x597.jpg"},"ams_acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why - Mpelembe Network<\/title>\n<meta name=\"description\" content=\"any of the pioneers who began developing artificial neural networks weren\u2019t sure how they actually worked - and we\u2019re no more certain today.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why - Mpelembe Network\" \/>\n<meta property=\"og:description\" content=\"any of the pioneers who began developing artificial neural networks weren\u2019t sure how they actually worked - and we\u2019re no more certain today.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/\" \/>\n<meta property=\"og:site_name\" content=\"Mpelembe Network\" 
\/>\n<meta property=\"article:published_time\" content=\"2023-04-03T09:02:27+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-04-18T14:28:56+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1120\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"20 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\"},\"headline\":\"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells 
us\u00a0why\",\"datePublished\":\"2023-04-03T09:02:27+00:00\",\"dateModified\":\"2023-04-18T14:28:56+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/\"},\"wordCount\":3996,\"image\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/file-20230321-20-arhk93.jpg\",\"keywords\":[\"Academic disciplines\",\"Articles\",\"Artificial intelligence\",\"Artificial neural networks\",\"Beatrice Fazi\",\"bell laboratories\",\"Bing\",\"Carver Mead\",\"Cognitive science\",\"Computational neuroscience\",\"Creative Commons\",\"Cybernetics\",\"David Beer\",\"David Rumelhart\",\"Deep learning\",\"Edward Rosenfeld\",\"Emerging technologies\",\"Harry Collins\",\"IBM\",\"Jack D. Cowan\",\"James Anderson\",\"Jay McClelland\",\"Katherine Hayles\",\"Larry Hardesty\",\"Leon N. Cooper\",\"London\",\"Machine learning\",\"Mead\",\"Microsoft\",\"Neural network\",\"Richard Waters\",\"SHUTTERSTOCK\",\"Stephen Grossberg\",\"Taina Bucher\",\"Teuvo Kohonen\",\"Walter J. 
Freeman\",\"Walter Pitts\",\"Warren McCulloch\",\"Wilfred Taylor\",\"Yann LeCun\"],\"articleSection\":[\"Innovation\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/\",\"name\":\"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why - Mpelembe Network\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/file-20230321-20-arhk93.jpg\",\"datePublished\":\"2023-04-03T09:02:27+00:00\",\"dateModified\":\"2023-04-18T14:28:56+00:00\",\"author\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\"},\"description\":\"any of the pioneers who began developing artificial neural networks weren\u2019t sure how they actually worked - and we\u2019re no more certain 
today.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#primaryimage\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/file-20230321-20-arhk93.jpg\",\"contentUrl\":\"https:\\\/\\\/mpelembe.net\\\/wp-content\\\/uploads\\\/2023\\\/04\\\/file-20230321-20-arhk93.jpg\",\"width\":1920,\"height\":1120},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/mpelembe.net\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#website\",\"url\":\"https:\\\/\\\/mpelembe.net\\\/\",\"name\":\"Mpelembe Network\",\"description\":\"Collaboration 
Platform\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/mpelembe.net\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/mpelembe.net\\\/#\\\/schema\\\/person\\\/2421ebbf3150931b1066b10a196d7608\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g\",\"caption\":\"admin\"},\"sameAs\":[\"https:\\\/\\\/mpelembe.net\"],\"url\":\"https:\\\/\\\/mpelembe.net\\\/index.php\\\/author\\\/admin\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why - Mpelembe Network","description":"any of the pioneers who began developing artificial neural networks weren\u2019t sure how they actually worked - and we\u2019re no more certain today.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/","og_locale":"en_US","og_type":"article","og_title":"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why - Mpelembe Network","og_description":"any of the pioneers who began developing artificial neural networks weren\u2019t sure how they actually worked - and we\u2019re no more certain today.","og_url":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/","og_site_name":"Mpelembe Network","article_published_time":"2023-04-03T09:02:27+00:00","article_modified_time":"2023-04-18T14:28:56+00:00","og_image":[{"width":1920,"height":1120,"url":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93.jpg","type":"image\/jpeg"}],"author":"admin","twitter_card":"summary_large_image","twitter_misc":{"Written by":"admin","Est. 
reading time":"20 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#article","isPartOf":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/"},"author":{"name":"admin","@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608"},"headline":"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why","datePublished":"2023-04-03T09:02:27+00:00","dateModified":"2023-04-18T14:28:56+00:00","mainEntityOfPage":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/"},"wordCount":3996,"image":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#primaryimage"},"thumbnailUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93.jpg","keywords":["Academic disciplines","Articles","Artificial intelligence","Artificial neural networks","Beatrice Fazi","bell laboratories","Bing","Carver Mead","Cognitive science","Computational neuroscience","Creative Commons","Cybernetics","David Beer","David Rumelhart","Deep learning","Edward Rosenfeld","Emerging technologies","Harry Collins","IBM","Jack D. Cowan","James Anderson","Jay McClelland","Katherine Hayles","Larry Hardesty","Leon N. Cooper","London","Machine learning","Mead","Microsoft","Neural network","Richard Waters","SHUTTERSTOCK","Stephen Grossberg","Taina Bucher","Teuvo Kohonen","Walter J. 
Freeman","Walter Pitts","Warren McCulloch","Wilfred Taylor","Yann LeCun"],"articleSection":["Innovation"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/","url":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/","name":"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why - Mpelembe Network","isPartOf":{"@id":"https:\/\/mpelembe.net\/#website"},"primaryImageOfPage":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#primaryimage"},"image":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#primaryimage"},"thumbnailUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93.jpg","datePublished":"2023-04-03T09:02:27+00:00","dateModified":"2023-04-18T14:28:56+00:00","author":{"@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608"},"description":"any of the pioneers who began developing artificial neural networks weren\u2019t sure how they actually worked - and we\u2019re no more certain 
today.","breadcrumb":{"@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#primaryimage","url":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93.jpg","contentUrl":"https:\/\/mpelembe.net\/wp-content\/uploads\/2023\/04\/file-20230321-20-arhk93.jpg","width":1920,"height":1120},{"@type":"BreadcrumbList","@id":"https:\/\/mpelembe.net\/index.php\/ai-will-soon-become-impossible-for-humans-to-comprehend-the-story-of-neural-networks-tells-us-why\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/mpelembe.net\/"},{"@type":"ListItem","position":2,"name":"AI will soon become impossible for humans to comprehend \u2013 the story of neural networks tells us\u00a0why"}]},{"@type":"WebSite","@id":"https:\/\/mpelembe.net\/#website","url":"https:\/\/mpelembe.net\/","name":"Mpelembe Network","description":"Collaboration 
Platform","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/mpelembe.net\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/mpelembe.net\/#\/schema\/person\/2421ebbf3150931b1066b10a196d7608","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c66a2765397adfb52418f6f2310640167a0af23ce662da1b68c8a0b8650de556?s=96&d=mm&r=g","caption":"admin"},"sameAs":["https:\/\/mpelembe.net"],"url":"https:\/\/mpelembe.net\/index.php\/author\/admin\/"}]}},"_links":{"self":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/2248","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/comments?post=2248"}],"version-history":[{"count":2,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/2248\/revisions"}],"predecessor-version":[{"id":2276,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/posts\/2248\/revisions\/2276"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/media\/2249"}],"wp:attachment":[{"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/media?parent=2248"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/mpelembe.net\/i
ndex.php\/wp-json\/wp\/v2\/categories?post=2248"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/mpelembe.net\/index.php\/wp-json\/wp\/v2\/tags?post=2248"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}