Issue #2: MIT Researchers mitigate hallucinations
Plus advances in multi-agent tasks, autonomous driving, and RAG security
Mitigating Contextual Hallucinations in Large Language Models
LLMs sometimes produce inaccurate or invented details when summarizing or answering questions about a provided document. MIT researchers introduce the Lookback Lens, a method to detect and reduce these "contextual hallucinations" by measuring how much the model attends to the provided context versus its own generated text. For each attention head, the Lookback Lens computes a "lookback ratio": the fraction of attention placed on context tokens rather than on previously generated tokens. A simple linear classifier trained on these ratios flags hallucinated spans, and can guide decoding to steer the model away from them. The approach transfers across tasks and models, effectively lowering hallucination rates in tasks like summarization. arxiv
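As a rough sketch of the core feature, the lookback ratio can be computed from attention weights roughly like this (the function name and tensor layout below are our own, not the paper's code):

```python
import numpy as np

def lookback_ratio(attn: np.ndarray, n_context: int) -> np.ndarray:
    """Per-head lookback ratio for one newly generated token.

    attn: (num_heads, seq_len) attention weights the token places on all
          earlier positions -- context tokens first, then generated tokens.
    n_context: number of tokens in the provided context/prompt.
    """
    ctx = attn[:, :n_context].sum(axis=1)   # attention mass on the context
    gen = attn[:, n_context:].sum(axis=1)   # attention mass on generated tokens
    return ctx / (ctx + gen + 1e-9)         # fraction of attention "looking back"

# These per-head ratios, aggregated over a span of tokens, form the feature
# vector on which the hallucination classifier is trained.
```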
An illustration of the Lookback Lens
Scaffolding Theory of Mind for Multi-Agent Tasks with LLMs
Current AI systems that learn to work with other AIs or humans (called multi-agent systems) have trouble adapting to new situations or unfamiliar partners. To solve this problem, Stanford researchers created a new AI agent called "Hypothetical Minds".
The agent is loosely modeled on human cognition, with components for perception, memory, and hierarchical planning. A "Theory of Mind" module scaffolds the high-level planning process: it generates natural-language hypotheses about what other agents might be thinking or planning to do, then evaluates and refines those hypotheses as new behavior is observed.
The researchers tested Hypothetical Minds in different situations - competitive games, cooperative tasks, and mixed scenarios. They found it performed much better than other AI systems, especially in complex situations with multiple participants.
The study shows that giving AI the ability to understand and predict others' behavior can make it much more effective in real-world scenarios where it needs to interact with various agents. arxiv
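As a toy illustration of that generate-evaluate-refine loop, here is a minimal sketch; the `llm` and `evaluate` stubs are hypothetical stand-ins, not the paper's implementation:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; swap in an actual API."""
    return "The other agent is guarding the nearest resource."

def evaluate(hypothesis: str, observed_action: str) -> float:
    """Toy scorer: reward hypotheses that correctly predict the other
    agent's subsequent behavior."""
    return 1.0 if "guarding" in observed_action else 0.0

def theory_of_mind_step(memory: list[str], observed_action: str) -> str:
    # 1. Generate a natural-language hypothesis about the other agent's goal.
    hypothesis = llm(f"History: {memory}. What is the other agent trying to do?")
    # 2. Evaluate the hypothesis against newly observed behavior.
    if evaluate(hypothesis, observed_action) > 0.5:
        # 3. Reinforce hypotheses that predicted well; they condition future plans.
        memory.append(hypothesis)
    return hypothesis
```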
Efficient Brain-Inspired Learning for Autonomous Driving Trajectory Prediction
The paper introduces the Human-Like Trajectory Prediction model (HLTP++) to help self-driving cars predict how other vehicles will move on the road.
The system mimics the cognitive processes of human drivers through a teacher-student (knowledge distillation) setup. It has two main parts:
- A "teacher" model that imitates how humans allocate attention on the road based on their position, how close other cars are, and how fast they're going.
- A lightweight "student" model that distills the teacher's knowledge and focuses on real-time interactions and decision-making, similar to how humans use their memory while driving.
When tested against other prediction systems, HLTP++ performed much better, reducing errors in predicting other vehicles' movements by over 11% on one benchmark and 25% on another. It also held up in challenging scenarios where some of the usual input data was missing. arxiv
Illustration of the HLTP++ model
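The paper's exact objective isn't shown here, but the general teacher-student distillation pattern for trajectories looks roughly like the following sketch (the weights and example waypoints are illustrative assumptions, not HLTP++'s actual loss):

```python
import numpy as np

def distillation_loss(student_traj, teacher_traj, ground_truth, alpha=0.5):
    """Generic teacher-student distillation for trajectory regression:
    blend the error against the ground-truth trajectory with a term
    pulling the lightweight student toward the richer teacher."""
    student_traj = np.asarray(student_traj)
    task = np.mean((student_traj - np.asarray(ground_truth)) ** 2)   # fit the data
    mimic = np.mean((student_traj - np.asarray(teacher_traj)) ** 2)  # follow the teacher
    return alpha * task + (1 - alpha) * mimic

# Example: predicted (x, y) waypoints for the next three timesteps.
student = [[0.0, 0.0], [1.1, 0.4], [2.3, 0.9]]
teacher = [[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]]
truth   = [[0.0, 0.0], [1.0, 0.5], [2.1, 1.0]]
print(distillation_loss(student, teacher, truth))  # scalar training loss
```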
RAG for Role-based Security and NATO Clearance Levels
This paper introduces a simple architecture for using Large Language Models (LLMs) in enterprise applications, ensuring secure information access based on user roles and NATO clearance levels. It addresses current LLM limitations in handling sensitive data by employing Retrieval-Augmented Generation (RAG) and Mixture of Experts (MoE) models. The system filters documents and experts based on the user's role and clearance level, preventing information leakage. The architecture can function with RAG, MoE, or both, enhancing security in enterprise applications. arxiv
Sequence Diagram for Role/Clearance level based access to LLM only using RAG
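A minimal sketch of the pre-retrieval filtering idea, assuming a simple clearance ordering and a pluggable `retriever` callback (both our own constructions, not the paper's code):

```python
from dataclasses import dataclass

# Illustrative ordering; NATO markings (NU, NR, NC, NS, CTS) follow the
# same dominance idea.
CLEARANCE_ORDER = ["UNCLASSIFIED", "RESTRICTED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]

@dataclass
class Document:
    text: str
    clearance: str   # minimum clearance required to read this document
    roles: set       # roles permitted to access it

def allowed(doc: Document, user_clearance: str, user_roles: set) -> bool:
    # Retrievable only if the user's clearance dominates the document's
    # marking AND the user holds at least one permitted role.
    return (CLEARANCE_ORDER.index(user_clearance) >= CLEARANCE_ORDER.index(doc.clearance)
            and bool(doc.roles & user_roles))

def secure_retrieve(query, corpus, user_clearance, user_roles, retriever):
    # Filter BEFORE retrieval and generation, so text the user may not see
    # never reaches the LLM's context window.
    visible = [d for d in corpus if allowed(d, user_clearance, user_roles)]
    return retriever(query, visible)
```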
Today I Learned
Every issue, we highlight new AI concepts and terminology to help educate our readers. This issue, we learned about:
Multi-Agent Reinforcement Learning (MARL)
Multi-agent reinforcement learning (MARL) is a type of artificial intelligence (AI) where multiple agents (or entities) learn to make decisions by interacting with each other and their environment. In MARL, each agent tries to maximize its own reward or performance through trial and error, learning from the outcomes of its actions. The agents can be cooperative, competitive, or a mix of both, depending on the scenario. This approach is used in complex environments where multiple entities must work together or compete, such as in games, robotics, and simulations of real-world systems.
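As a toy example, two independent learners in a repeated matrix game form one of the simplest MARL setups; the payoffs and hyperparameters below are invented for illustration:

```python
import random
from collections import defaultdict

ACTIONS = ["cooperate", "defect"]
# Prisoner's-dilemma-style payoffs: (reward to agent 0, reward to agent 1).
PAYOFF = {("cooperate", "cooperate"): (3, 3), ("cooperate", "defect"): (0, 5),
          ("defect", "cooperate"): (5, 0), ("defect", "defect"): (1, 1)}

values = [defaultdict(float), defaultdict(float)]  # action-value estimates per agent
alpha, eps = 0.1, 0.2                              # learning rate, exploration rate

for _ in range(5000):
    # Each agent picks its own action: mostly greedy, sometimes exploratory.
    acts = tuple(random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=values[i].__getitem__) for i in range(2))
    rewards = PAYOFF[acts]
    for i in range(2):  # each agent learns from its OWN reward signal
        values[i][acts[i]] += alpha * (rewards[i] - values[i][acts[i]])

print({a: round(values[0][a], 2) for a in ACTIONS})  # agent 0's learned values
```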
Theory of Mind
In the context of artificial intelligence (AI), Theory of Mind (ToM) refers to an AI's ability to attribute mental states, such as beliefs, intentions, and desires, to other agents or entities. This capability allows AI systems to predict and interpret the behavior of others, facilitating more effective interactions in multi-agent environments. ToM in AI is used to create models that can anticipate the actions and strategies of other agents, enabling cooperation, competition, and adaptation in dynamic and complex scenarios. This enhances the AI's performance in tasks requiring social intelligence.
Scaffolding Theory
Scaffolding theory is an educational approach where instructors provide temporary, tailored support to help learners acquire new skills or understand complex concepts. This support, based on Vygotsky's concept of the Zone of Proximal Development, is gradually reduced as the learner becomes more competent and independent. The ultimate goal is to transfer responsibility to the learner, enabling them to eventually perform tasks or understand concepts without assistance.