Dec. 02, 2025 /Mpelembe Media/ — The academic paper introduces a novel framework for coordinating complex problem-solving in Large Language Model (LLM)-based multi-agent systems. To address the inherent inefficiencies of traditional static agent structures, the authors propose a “puppeteer-style” paradigm where a central orchestrator dynamically selects and sequences agents based on the evolving task state. This centralised orchestrator policy is continuously optimised using reinforcement learning (RL), leveraging a tailored reward function that explicitly balances solution quality with computational efficiency. Empirical results across various closed- and open-domain scenarios demonstrate that this adaptive approach achieves superior performance compared to existing methods while concurrently reducing token consumption. Finally, analysis of the evolving collaboration patterns confirms that the RL-driven policy leads to the emergence of highly compact and cyclic reasoning structures. Continue reading
Tag Archives: Natural language processing
The value of thought. How human-AI collaboration is measured economically
This touches on how large language models (LLMs) operate. Tokenization is the fundamental process in natural language processing (NLP) of breaking raw text down into smaller units called tokens, such as words, subwords, or characters. It is a crucial first step that transforms unstructured text into a structured format that machine learning models can process.
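As a minimal sketch, word-level tokenization can be done with a regular expression; this is illustrative only, since production systems use subword schemes such as BPE or WordPiece:

```python
import re

def tokenize(text: str) -> list[str]:
    # Split into word tokens, keeping punctuation as separate tokens.
    return re.findall(r"\w+|[^\w\s]", text.lower())

tokens = tokenize("Tokenization breaks raw text into tokens!")
print(tokens)  # ['tokenization', 'breaks', 'raw', 'text', 'into', 'tokens', '!']
```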
People trust legal advice generated by ChatGPT more than a lawyer – new study
Eike Schneiders, University of Southampton; Joshua Krook, University of Southampton, and Tina Seabrooke, University of Southampton
People who aren’t legal experts are more willing to rely on legal advice provided by ChatGPT than by real lawyers – at least, when they don’t know which of the two provided the advice. Continue reading
NotebookLM: Features and Use Cases
Jan. 01, 2024 /Mpelembe Media/ — NotebookLM is an experimental tool by Google that helps you understand and work with information from your notes and documents. It uses AI to summarize, answer questions, and generate new insights from your content.
How do you start a sentiment analysis project?
June 16, 2023 /Developers/ — Sentiment analysis is the process of determining the emotional tone of a piece of text. It is a subfield of natural language processing (NLP) that deals with identifying and extracting subjective information from text. Sentiment analysis is often used to understand customer sentiment, brand reputation, and social media trends.
There are two main types of sentiment analysis: Continue reading
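A starting point for such a project can be sketched as a simple lexicon-based scorer; the word lists here are tiny illustrative assumptions, and real projects typically use trained models or established lexicons such as VADER:

```python
# Hypothetical sentiment lexicons, for illustration only.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "sad"}

def sentiment(text: str) -> str:
    # Score = count of positive words minus count of negative words.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("terrible service"))           # negative
```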
What Large Language Models (or LLMs) are, how they are developed, and how they work.
April 19, 2023 /Technology/ — A large language model (LLM) is a type of artificial intelligence (AI) that is trained on a massive amount of text data. This data can be anything from books and articles to social media posts and code. LLMs are able to learn the statistical relationships between words and phrases, which allows them to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. LLMs can be used for a variety of tasks, including: Continue reading
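The idea of learning statistical relationships between words can be sketched with a toy bigram counter; real LLMs use neural networks over subword tokens rather than raw counts, and the corpus here is an invented example:

```python
from collections import Counter, defaultdict

# Toy training corpus (illustrative only).
corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word: str) -> str:
    # Return the word most frequently observed after `word`.
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # cat
```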
Practical examples of combining Search Results with the Power of NLP (Natural Language Processing) and Semantic Knowledge
April 16, 2023 /Technology/ — We’ve all had that frustrating experience of trying to search for something and not finding the results we are after. By building systems that leverage NLP, we can infuse our systems with semantic knowledge and minimize this frustration for end users of our systems.
Free-text search can be limiting, requiring us to search using the exact set of keywords that have been indexed. To go beyond simple text matching requires an understanding of both the search intent and the semantic meaning of the words being searched.
Here are a few practical examples of combining search results with the power of NLP and semantic knowledge: Continue reading
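Going beyond keyword matching often means comparing word or sentence embeddings; a minimal sketch with hand-made toy vectors (all values invented for illustration; real systems use learned embeddings) shows how "car" can match "vehicle" with no shared keywords:

```python
import math

# Hypothetical 3-dimensional word vectors, for illustration only.
VECTORS = {
    "car":     [0.9, 0.1, 0.0],
    "vehicle": [0.85, 0.15, 0.05],
    "banana":  [0.0, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "car" is semantically closer to "vehicle" than to "banana",
# even though the strings share no keywords.
print(cosine(VECTORS["car"], VECTORS["vehicle"]) >
      cosine(VECTORS["car"], VECTORS["banana"]))  # True
```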
OPINION: Besides AI, regulation key to fight mis/disinformation
By Anya Schiffrin, director of the Technology, Media and Communications specialization at Columbia University’s School of International and Public Affairs.
When worries about online mis/disinformation became widespread after the 2016 U.S. election, there was hope that the tech giants would use artificial intelligence (AI) to fix the mess they had created. The hope was that platforms could use AI and Natural Language Processing (NLP) to automatically block or downrank false, illegal or inflammatory content online without governments having to regulate. Continue reading
