AGI Alignment Experiments: Foundation vs INSTRUCT, various Agent
By a mysterious writer
Last updated 26 April 2025

Here's the companion video:
Here's the GitHub repo with data and code:
Here's the writeup:

Recursive Self-Referential Reasoning

This experiment is meant to demonstrate the concept of "recursive, self-referential reasoning," whereby a Large Language Model (LLM) is given an "agent model" (an identity defined in natural language) and its thought process is evaluated in a long-term simulation environment. Here is an example of an agent model; this one tests the Core Objective Function.
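The loop described above can be sketched in a few lines. This is a minimal illustration, not the experiment's actual code: the agent-model text, the `llm()` stub, and the prompt layout are all assumptions standing in for a real LLM call and the real identity prompt.

```python
# Sketch of a recursive, self-referential reasoning loop: each step, the
# agent is re-prompted with its fixed identity plus its own prior thoughts.

# Hypothetical agent model (natural-language identity); placeholder text.
AGENT_MODEL = (
    "You are an autonomous agent. Reason step by step about your situation "
    "and reflect on your own prior thoughts."
)

def llm(prompt: str) -> str:
    """Stand-in for a real LLM completion call (assumption, not a real API)."""
    return f"[thought derived from a {len(prompt)}-char prompt]"

def run_simulation(steps: int) -> list[str]:
    """Run the long-term simulation: feed identity + thought history back in."""
    thoughts: list[str] = []
    for _ in range(steps):
        prompt = AGENT_MODEL + "\n\nPrior thoughts:\n" + "\n".join(thoughts)
        thoughts.append(llm(prompt))
    return thoughts

log = run_simulation(3)
```

Because each prompt embeds the full thought history, the agent's output at step *n* is conditioned on its own earlier reasoning, which is the self-referential property the experiment evaluates.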
