Metagov Seminar - Experimental Governance Sandboxes and LLM Agent-Based Simulations (Kera)

The presentation and demo will explore how we can "hum" over AI agents and use prompts as regulatory artifacts and tools of communicative action. Against the promise of a "model" Civitas Dei presiding over responsible, trustworthy, human-centered AIs, we will emphasize the disobedience of prompts and exploits as a means of preserving political and social agency. In response to procedural and overly bureaucratic AI governance efforts (such as those outlined by the OECD, the EU AI Act, and various US directives), which we claim lead to regulatory capture, we propose a return to a more experimental and participatory form of governance.

Inspired by the TCP/IP protocol "wars" and principles such as robustness and "rough consensus and running code," our project seeks to apply these lessons to the management of emerging AI infrastructures. We advocate open, transparent practices and environments, such as exploratory sandboxes that involve a broad range of stakeholders in experiments and decision-making. This approach contrasts sharply with the current emphasis on compliance and (self-)assessments, which centralizes power and limits public participation.

Our ongoing experiments with LLM agent-based simulations aim to demonstrate robust, decentralized governance of AIs, ensuring that their development remains aligned with democratic values and the public interest, and preventing the concentration of technological power over our common future. How can we preserve political and social agency in an age of closed models that cannibalize and compress the Internet into API calls?
