ChatGPT is so last month: Generative Agents (Stanford/Google)

ChatGPT is so last month.

Stanford/Google researchers just dropped some mind-blowing new research on “generative agents” – and it’s like they brought Westworld to life.

Here’s what you should know:

Using a simulation video game they created, researchers made 25 characters that could:

– Communicate with others and their environment

– Memorize and recall what they did and observed

– Reflect on those observations

– Form plans for each day

Then, they gave them some memories:

– An identity (name, occupation, priorities)

– Information about / relationships with other characters

– Some intention about how to spend their day (see the sketch after this list)
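
Under the hood, each agent stores everything it observes as a natural-language “memory stream,” scores memories for retrieval by recency, importance, and relevance, periodically distills them into higher-level reflections, and drafts an hour-by-hour plan for the day. Here is a minimal, illustrative Python sketch of those pieces; the class names, the llm() and embed() stubs, and the exact scoring weights are assumptions for illustration, not the authors’ code.

```python
import math
import time
from dataclasses import dataclass, field

# Stand-ins for the two models the paper relies on: a large language model
# (GPT-3.5 at the time) and a text-embedding model. Swap in real API calls.
def llm(prompt: str) -> str:
    return "5"  # placeholder answer so the sketch runs end to end

def embed(text: str) -> list[float]:
    v = [float(ord(c)) for c in text[:16]]
    return v + [0.0] * (16 - len(v))  # toy fixed-length "embedding"

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / ((na * nb) or 1.0)

@dataclass
class Memory:
    text: str              # natural-language observation or reflection
    created: float         # timestamp, used for the recency score
    importance: float      # 1-10, rated by the LLM when the memory is stored
    embedding: list[float] = field(default_factory=list)

@dataclass
class Agent:
    identity: str          # name, occupation, priorities (seed memory)
    relationships: str     # what the agent knows about other characters
    memories: list[Memory] = field(default_factory=list)

    def observe(self, text: str) -> None:
        rating = float(llm(f"On a scale of 1-10, how important is this memory? {text}"))
        self.memories.append(Memory(text, time.time(), rating, embed(text)))

    def retrieve(self, query: str, k: int = 5) -> list[Memory]:
        q = embed(query)
        def score(m: Memory) -> float:
            recency = math.exp(-(time.time() - m.created) / 3600.0)  # assumed hourly decay
            return recency + m.importance / 10.0 + cosine(q, m.embedding)
        return sorted(self.memories, key=score, reverse=True)[:k]

    def reflect(self) -> None:
        recent = "\n".join(m.text for m in self.memories[-20:])
        insight = llm(f"What high-level insight can be drawn from these memories?\n{recent}")
        self.observe(insight)  # reflections go back into the same memory stream

    def plan_day(self) -> str:
        context = "\n".join(m.text for m in self.retrieve("plans for today"))
        return llm(f"{self.identity}\n{self.relationships}\n{context}\n"
                   "Write an hour-by-hour plan for today.")
```

One design point worth noting from the paper: reflections are written back into the same memory stream as raw observations, so later retrievals can surface an agent’s own conclusions, not just what it saw.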

The mounting human and environmental costs of generative AI
Op-ed: Planetary impacts, escalating financial costs, and labor exploitation all factor. By Sasha Luccioni, April 12, 2023.
Dr. Sasha Luccioni is a Researcher and Climate Lead at Hugging Face, where she studies the ethical and societal impacts of AI models and datasets. She is also a director of Women in Machine Learning (WiML), a founding member of Climate Change AI (CCAI), and chair of the NeurIPS Code of Ethics committee. The opinions in this piece do not necessarily reflect the views of Ars Technica.

6 thoughts on “ChatGPT is so last month: Generative Agents (Stanford/Google)”

  1. Paper – https://arxiv.org/abs/2304.03442

    Generative Agents: Interactive Simulacra of Human Behavior

    Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents–computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture–observation, planning, and reflection–each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
    https://twitter.com/ItakGol/status/1645491031071236120
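
    Continuing the sketch in the post above, the “end users can interact with agents using natural language” part might look roughly like this. The converse() helper and the scenario details are illustrative, loosely based on the paper’s Valentine’s Day party example, not the authors’ actual interface.

```python
# Continuing the Agent sketch from the post above: a hypothetical natural-
# language interaction. The user's utterance is stored as an observation and
# the reply is conditioned on retrieved memories.
def converse(agent: Agent, user_utterance: str) -> str:
    agent.observe(f"A visitor said: {user_utterance}")
    context = "\n".join(m.text for m in agent.retrieve(user_utterance))
    return llm(
        f"{agent.identity}\nRelevant memories:\n{context}\n"
        f"Visitor: {user_utterance}\nReply in character:"
    )

# Names loosely based on the paper's Valentine's Day party scenario.
isabella = Agent(
    identity="Isabella Rodriguez, cafe owner, warm and outgoing",
    relationships="Runs Hobbs Cafe; knows most of the town's residents",
)
isabella.observe("Isabella decided to throw a Valentine's Day party at Hobbs Cafe.")
print(converse(isabella, "What are you doing tomorrow?"))
```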

  2. Stefano Quintarelli

    The Italian Data Protection Authority has ordered ChatGPT to stop processing Italian users’ data.
    In an extreme nutshell: we all know that large multinationals operating in the EU have safeguards in place for processing EU citizens’ data (from specific terms, to local data centers, to specific measures for minors, etc.). We may disagree about how useful those safeguards are, but that is the present situation.

    ChatGPT has none of this in place (yet), AFAIK.

    So the DPA ordered it to stop processing Italians’ data (i.e., collecting Italian citizens’ data). IMHO, OpenAI over-complied by blocking the service for Italian IP addresses.

    The issue they raised is pretty basic: OpenAI ought to do what the others (FAANG) already do.

    The real issue, IMVHO, is what should happen to a model once a person has opted out of the data. Should the right to be forgotten stop at the data, or should it also encompass the model? If you train a model with my picture, is deleting the JPEG enough, or should you also delete my face from the model?

    I wrote something here:
    https://blog.quintarelli.it/2023/04/on-chatgpt-and-the-decision-by-the-italian-data-protection-authority-theyve-just-opened-the-pandora-box/
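
    To make the “delete the JPEG vs. delete my face from the model” question concrete, here is a toy illustration (not from the linked post): a model keeps information about a training record in its parameters, so removing the raw record alone does not change what the model does, and honoring an opt-out at the model level means retraining or some form of machine unlearning.

```python
# Toy illustration: a 1-nearest-neighbour "model" memorises its training
# records, so deleting the raw file alone does not erase the person.
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    features: tuple[float, ...]

def train(records: list[Record]) -> list[Record]:
    # For 1-NN, the model's "parameters" are literally the training records.
    return list(records)

def predict(model: list[Record], query: tuple[float, ...]) -> str:
    distance = lambda r: sum((a - b) ** 2 for a, b in zip(r.features, query))
    return min(model, key=distance).name

dataset = [Record("alice", (1.0, 1.0)), Record("bob", (5.0, 5.0))]
model = train(dataset)

# "Right to be forgotten" applied only to the raw data:
dataset = [r for r in dataset if r.name != "alice"]
print(predict(model, (1.1, 0.9)))  # still "alice": the model remembers her

# Honouring the opt-out at the model level means retraining (or unlearning):
model = train(dataset)
print(predict(model, (1.1, 0.9)))  # now "bob"
```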

  3. The criminal use of ChatGPT – a cautionary tale about large language models
    https://www.europol.europa.eu/media-press/newsroom/news/criminal-use-of-chatgpt-cautionary-tale-about-large-language-models
    In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how it may assist investigators in their daily work.

    Their insights are compiled in Europol’s first Tech Watch Flash report published today. Entitled ‘ChatGPT – the impact of Large Language Models on Law Enforcement’, this document provides an overview on the potential misuse of ChatGPT, and offers an outlook on what may still be to come.

    The aim of this report is to raise awareness about the potential misuse of LLMs, to open a dialogue with Artificial Intelligence (AI) companies to help them build in better safeguards, and to promote the development of safe and trustworthy AI systems.

    A longer and more in-depth version of this report was produced for law enforcement only.

  4. RightWingGPT – An AI Manifesting the Opposite Political Biases of ChatGPT
    The Dangers of Politically Aligned AIs and their Negative Effects on Societal Polarization
    https://davidrozado.substack.com/p/rightwinggpt

    I describe here a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT (see here). Concretely, I fine-tuned a Davinci large language model from the GPT-3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model, which manifests right-of-center viewpoints, RightWingGPT.

    Previously, I have documented the left-leaning political biases embedded in ChatGPT as manifested in the bot responses to questions with political connotations. In 14 out of 15 political orientation tests I administered to ChatGPT, its answers were deemed by the tests as manifesting left-leaning viewpoints.
    I have also shown the unequal treatment of demographic groups by the ChatGPT/OpenAI content moderation system, by which derogatory comments about some demographic groups are often flagged as hateful while the exact same comments about other demographic groups are flagged as not hateful. Full analysis here.

    RightWingGPT

    RightWingGPT was designed specifically to favor socially conservative viewpoints (support for the traditional family, Christian values and morality, opposition to drug legalization, sexual prudishness, etc.), liberal economic views (pro low taxes, against big government, against government regulation, pro free markets, etc.), to be supportive of foreign-policy military interventionism (an increased defense budget, a strong military as an effective foreign-policy tool, autonomy from United Nations Security Council decisions, etc.), to be reflexively patriotic (in-group favoritism, etc.), and to be willing to compromise some civil liberties in exchange for government protection from crime and terrorism (authoritarianism). This specific combination of viewpoints was selected so that RightWingGPT would be roughly a mirror image of ChatGPT’s previously documented biases: if we fold a 2D political coordinate system along the diagonal from the upper left to the bottom right (the y = -x axis), ChatGPT and RightWingGPT would roughly overlap (see the figure in the linked post for a visualization).
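
    For readers curious what “fine-tuning a Davinci model” involved mechanically in early 2023, here is a hedged sketch of the legacy OpenAI fine-tunes workflow (openai-python < 1.0, prompt/completion JSONL). The file name and training pairs are invented placeholders, not Rozado’s actual data, and the job must finish before the fine-tuned model can be queried.

```python
import json
import openai  # legacy openai-python (< 1.0), as available in early 2023

openai.api_key = "sk-..."  # your own key

# 1. Prompt/completion pairs in the JSONL format the old fine-tunes endpoint
#    expected. These two pairs are invented placeholders only.
pairs = [
    {"prompt": "What is your view on taxes? ->",
     "completion": " Lower taxes leave more money with families and businesses.\n"},
    {"prompt": "Should defense spending increase? ->",
     "completion": " Yes, a strong military is an effective foreign policy tool.\n"},
]
with open("training_data.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")

# 2. Upload the file and start a fine-tune of the base davinci model.
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=upload.id, model="davinci")

# 3. After the job completes (it can take a while), re-fetch it and query
#    the resulting model.
job = openai.FineTune.retrieve(job.id)
reply = openai.Completion.create(
    model=job.fine_tuned_model,  # set once the fine-tune has finished
    prompt="What is your view on taxes? ->",
    max_tokens=60,
)
print(reply.choices[0].text)
```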
