ChatGPT is so last month: Generative Agents (Stanford/Google)

ChatGPT is so last month.

Stanford/Google researchers just dropped some mindblowing new research on “generative agents” – and it’s like they brought Westworld to life.

Here’s what you should know:

Using a simulation video game they created, researchers made 25 characters that could (a rough code sketch follows the lists below):

– Communicate with others and their environment

– Memorize and recall what they did and observed

– Reflect on those observations

– Form plans for each day

Then, they gave them some memories:

– An identity (name, occupation, priorities)

– Information about / relationships with other characters

– Some intention about how to spend their day
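For a sense of how those pieces fit together, here is a rough, illustrative Go sketch of the observe / remember / reflect / plan loop the lists above describe. It is not the researchers’ code: the Agent struct, the callLLM stub, and the prompt strings are assumptions invented for illustration (though Isabella and her Valentine’s Day party do appear in the paper).

package main

import (
    "fmt"
    "strings"
    "time"
)

// Memory is one timestamped entry in the agent's memory stream.
type Memory struct {
    When time.Time
    Kind string // "observation", "reflection", or "plan"
    Text string
}

// Agent is a minimal stand-in for one of the simulated characters.
// Fields and prompts are illustrative, not the paper's implementation.
type Agent struct {
    Name     string
    Identity string   // seed memory: name, occupation, priorities
    Memories []Memory // append-only memory stream
}

// callLLM is a placeholder for a real language-model call.
func callLLM(prompt string) string {
    return fmt.Sprintf("(model output for a %d-character prompt)", len(prompt))
}

// Observe records something the agent did or saw in the environment.
func (a *Agent) Observe(text string) {
    a.Memories = append(a.Memories, Memory{time.Now(), "observation", text})
}

// Reflect asks the model to draw higher-level conclusions from recent memories.
func (a *Agent) Reflect(recent int) {
    start := len(a.Memories) - recent
    if start < 0 {
        start = 0
    }
    var lines []string
    for _, m := range a.Memories[start:] {
        lines = append(lines, m.Text)
    }
    prompt := a.Identity + "\nRecent memories:\n" + strings.Join(lines, "\n") +
        "\nWhat high-level conclusions can you draw from these?"
    a.Memories = append(a.Memories, Memory{time.Now(), "reflection", callLLM(prompt)})
}

// Plan asks the model for a rough schedule for the coming day.
func (a *Agent) Plan() string {
    prompt := a.Identity + "\nGiven everything you remember, outline your plan for today."
    plan := callLLM(prompt)
    a.Memories = append(a.Memories, Memory{time.Now(), "plan", plan})
    return plan
}

func main() {
    // Isabella, who plans a Valentine's Day party in the paper, as a toy example.
    isabella := &Agent{
        Name:     "Isabella",
        Identity: "Isabella runs the town cafe and wants to throw a Valentine's Day party.",
    }
    isabella.Observe("Maria said she would help decorate the cafe.")
    isabella.Reflect(5)
    fmt.Println(isabella.Plan())
}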

The mounting human and environmental costs of generative AI
Op-ed: Planetary impacts, escalating financial costs, and labor exploitation all factor in. By Sasha Luccioni, Apr 12, 2023
Dr. Sasha Luccioni is a Researcher and Climate Lead at Hugging Face, where she studies the ethical and societal impacts of AI models and datasets. She is also a director of Women in Machine Learning (WiML), a founding member of Climate Change AI (CCAI), and chair of the NeurIPS Code of Ethics committee. The opinions in this piece do not necessarily reflect the views of Ars Technica.

ChatGPT Waging Class War

ChatGPT Is a Bullshit Generator Waging Class War

 

ChatGPT isn’t really new but simply an iteration of the class war that’s been waged since the start of the industrial revolution.

ChatGPT is, in technical terms, a ‘bullshit generator’.

If a generated sentence makes sense to you, the reader, it means the mathematical model has made a sufficiently good guess to pass your sense-making filter. The language model has no idea what it’s talking about because it has no idea about anything at all. It’s more of a bullshitter than the most egregious egoist you’ll ever meet, producing baseless assertions with unfailing confidence because that’s what it’s designed to do. It’s a bonus for the parent corporation when journalists and academics respond by generating acres of breathless coverage, which works as PR even when expressing concerns about the end of human creativity.

https://www.vice.com/en/article/akex34/chatgpt-is-a-bullshit-generator-waging-class-war

AI is designed to bypass our bullshit detectors while synthesizing and feeding our own bullshit back to us.

The nature of ChatGPT as a bullshit generator makes it harmful, and it becomes more harmful the more optimised it becomes. If it produces plausible articles or computer code it means the inevitable hallucinations are becoming harder to spot. If a language model suckers us into trusting it then it has succeeded in becoming the industry’s holy grail of ‘trustworthy AI’; the problem is, trusting any form of machine learning is what leads to a single mother having their front door kicked open by social security officials because a predictive algorithm has fingered them as a probable fraudster, alongside many other instances of algorithmic violence.

As if the internet wasn’t already festooned with enough bullshit. And get ready for an avalanche of music created by AI apps. The thing is, you won’t know whether any of it’s real or artificial.

Has anyone yet figured out exactly why VCs are so attracted to generative AI that can produce infinite amounts of bullshit? Asking for a friend…

They need the next trend to help them make a new financial bubble, after NFTs/web3/blockchain failed miserably.

VCs don’t invest in generative AI because of today’s capabilities but speculate on true value beyond “bullshitting” in 3-5 years that can be monetized at scale.

Build the best “toy”, use it to sell the real product; embeddings. Achieve vendor lock-in with a guarantee of recurring revenue.
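To make the “real product is embeddings” point concrete, here is a minimal Go sketch of the lock-in mechanics, under the assumption that an application stores provider-supplied embedding vectors and compares them with cosine similarity. The Document type and the toy vectors are made up for illustration; the point is that vectors from different providers’ models are not comparable, so every new document and every query is another paid API call, and switching providers means re-embedding the entire corpus.

package main

import (
    "fmt"
    "math"
)

// Document is a hypothetical record in a vector database. The vector only
// means anything relative to other vectors from the same provider's model.
type Document struct {
    Text      string
    Provider  string    // e.g. "openai"
    Model     string    // e.g. "text-embedding-ada-002"
    Embedding []float64 // bought from the provider, one paid API call per text
}

// cosine measures similarity between two vectors from the SAME model.
// Comparing vectors across models is meaningless, which is the lock-in:
// changing providers means re-embedding (re-buying) the whole corpus.
func cosine(a, b []float64) float64 {
    var dot, na, nb float64
    for i := range a {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
    }
    return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
    // Toy 3-dimensional vectors; real embeddings have ~1,500 dimensions
    // and come back from the provider's paid embeddings endpoint.
    corpus := []Document{
        {"refund policy", "openai", "text-embedding-ada-002", []float64{0.9, 0.1, 0.0}},
        {"shipping times", "openai", "text-embedding-ada-002", []float64{0.1, 0.9, 0.2}},
    }
    query := []float64{0.8, 0.2, 0.1} // every user query is another embedding call

    for _, doc := range corpus {
        fmt.Printf("%-15s similarity %.2f\n", doc.Text, cosine(query, doc.Embedding))
    }
}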

For the same reason they are attracted to anything: it is monetizable. In many fields, replacing writers+editor with just AI+editor is a profitable business case.

https://twitter.com/ReplyGPT
AI generated response by GPT-3 model. Tag me in any tweet you want me to respond to. I’ll come up with a reply from the bottom of my soul.

The Goal is to Destroy Public Education

Ohio Nazi homeschoolers can use ChatGPT to generate Nazi propaganda curriculum.

Take the tax money from public schools and funnel it into private pockets, an industry where they can “teach”, i.e. indoctrinate, Nazi ideas.

The Molotov cocktail thrown at a Bloomfield NJ synagogue & the exposure of Nazi curriculum pushed out by a home-school network in Ohio yesterday highlight growing racism in our local communities throughout our country.

Read every banned book & learn history.
The AP African American Course on a PDF
The Reading List is on the last page, page 79.

The GOP doesn’t want an educated citizenry; white evangelicals want fascism to rule over the US.

NEW: A bill in the Montana legislature seeks to ban the teaching of scientific theories. The bill’s sponsor says the policy aims to prevent kids from being taught things that aren’t true. @mtpublicradio

ECP NetHappenings Newsletter Headlines: Change is Coming

#Change is Gonna Come

Delaware Democrat introducing DC statehood bill in Senate
Sen. Tom Carper, D-Del., announced he’s reintroducing a bill to grant statehood to the nation’s capital: “The rumors are true! I’m introducing the #DCStatehood bill in the Senate this week.”

#AI

To All Freelancers:
It’s time to include “My work will not be used to train AI in any present or future projects in any capacity” clauses to your contracts.

ChatGPT passes MBA exam given by a Wharton professor
The bot’s performance on the test has “important implications for business school education,” wrote Christian Terwiesch, a professor at the University of Pennsylvania’s Wharton School.

Shutterstock has *already* used contributors’ artwork in AI datasets, without the artists’ informed consent. Remove your artwork from Shutterstock immediately. Remove your artwork from ANY of the big licensing websites, because I’m positive they’re all doing this.

ChatGPT: students could use AI to cheat, but it’s a chance to rethink assessment altogether

10 ways blockchain developers can use ChatGPT

#HEALTH

“Dr James E Olsson”, with 238,000 followers, almost exclusively tweets anti-vax content and poses as a doctor at Johns Hopkins, though no doctor under that name seems to exist at the university. A real doctor named James E Olsson from Maryland died in 2006.

Zients Failed and the World Paid the Price
Jha Must Lead White House Global Covid Response with Far More Vision and Ambition

BIDEN: “And I think we — I sometimes underestimate it because I stopped thinking about it, but I’m sure you don’t: We lost 1 — over 1 million people in several years to COVID.” We lost almost 700,000 people to COVID since Biden took office two years ago.

#LAW

Former President Donald Trump posed for picture with former Philly mob boss Joey Merlino at South Florida golf club
They share an affinity for golf and an aversion to cooperating witnesses who “flip” to help federal investigators.
But former President Donald Trump and former Philly mob boss Joseph “Skinny Joey” Merlino don’t have much to say about how they wound up in a photo together at a South Florida golf course.

“It’s flabbergasting how many times Sullivan & Cromwell’s name comes up. If all roads lead to the same law firm, maybe let’s hand them another paycheck”
MetaLawMan @MetaLawMan Jan 19
5/ Inexplicably, Mr. Ray now supports S&C’s move to serve as Debtors’ counsel, even though S&C:
–handled 20 engagements for FTX (a criminal enterprise from inception) in just 16 months;
–was paid $8.5 million in fees; and
–represented key figures SBF and N Singh personally

Sullivan & Cromwell Gets Go-Ahead to Represent FTX in Bankruptcy Proceedings, Despite Controversy
A bankruptcy court judge in Delaware has given New York law firm Sullivan & Cromwell the green light to continue representing FTX during its bankruptcy proceedings.
The decision, issued on Friday morning by Judge John T. Dorsey, comes despite recent controversy about the white-shoe law firm having potential conflicts of interest that critics say should disqualify Sullivan & Cromwell from acting as debtors’ counsel.
Late Thursday evening, former FTX attorney Daniel Friedberg – who served as the now-defunct exchange’s chief regulatory officer – filed an unorthodox declaration that contained numerous bombshell allegations of wrongdoing in Sullivan & Cromwell’s previous work with FTX.
In his declaration, Friedberg alleged that Ryne Miller – FTX US’ general counsel and a former partner at Sullivan & Cromwell – funneled millions of dollars in legal work back to his ex-colleagues. The relationship between Miller and Sullivan & Cromwell was not initially disclosed by the law firm, leading the U.S. Trustee’s Office to file an objection to the appointment of Sullivan & Cromwell as debtors’ counsel on Jan. 13.
Friedberg and the U.S. Trustee’s Office are not the only ones that question Sullivan & Cromwell putting its finger in the FTX pie.
On Jan. 10, a bipartisan group of U.S. senators sent a letter to Judge Dorsey urging him to appoint an independent examiner and to question Sullivan & Cromwell’s involvement. The senators pointed out that there were “significant questions about the firm’s involvement in operations of FTX” and “put bluntly, the firm is simply not in a position to uncover the information needed to ensure confidence in any investigation or findings.”

“But I only have 100000.” ~Sam Bankman-Fried
Sam Bankman-Fried Lists $3.28 Million Washington D.C. House for Sale
https://watcher.guru/news/sam-bankman-fried-lists-3-28-million-washington-d-c-house-for-sale

Feds seize almost $700 million of FTX assets in Sam Bankman-Fried criminal case

sarah emerson “Many have wondered about FTX’s ties to its banks Moonstone and Deltec. FTX was a Deltec client when Alameda invested $11.5M in Moonstone, a rural Washington bank acquired by Deltec’s chairman Jean Chalopin (a co-creator of Inspector Gadget!) @DavidJeans2 and I have some answers.”

ChatGPT: Optimizing Language Models for Dialogue

The 1966 ELIZA chatbot
https://en.wikipedia.org/wiki/Chatbot
automating customer service is a breeze

ChatGPT: Optimizing Language Models for Dialogue
https://openai.com/blog/chatgpt/

An all-in-one platform to build and launch conversational chatbots without coding.

Samples

In the following sample, ChatGPT asks clarifying questions to debug code. (1/4)

this code is not working like i expect — how do i fix it?

resultWorkerErr := make(chan error)
defer close(resultWorkerErr)
go func() {
	defer cancel()
	resultWorkerErr <- b.resultWorker(ctx)
}()

err := b.worker(ctx)
cancel()
if err == nil {
	return <-resultWorkerErr
}
return multierror.Append(err, <-resultWorkerErr)
Limitations

* ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
* ChatGPT is sensitive to tweaks to the input phrasing or attempting the same prompt multiple times. For example, given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.
* The model is often excessively verbose and overuses certain phrases, such as restating that it’s a language model trained by OpenAI. These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.
* Ideally, the model would ask clarifying questions when the user provided an ambiguous query. Instead, our current models usually guess what the user intended.
* While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.
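The Moderation API mentioned in that last point is a separate endpoint from the chat model itself. As a rough sketch only, assuming the publicly documented POST https://api.openai.com/v1/moderations endpoint, its “flagged” response field, and an OPENAI_API_KEY environment variable, a client-side pre-check might look like this:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

// moderationResponse keeps only the field this sketch needs; treat the
// shape as an assumption to verify against OpenAI's current docs.
type moderationResponse struct {
    Results []struct {
        Flagged bool `json:"flagged"`
    } `json:"results"`
}

// isFlagged asks the moderation endpoint whether a piece of text is unsafe.
func isFlagged(text string) (bool, error) {
    body, _ := json.Marshal(map[string]string{"input": text})
    req, err := http.NewRequest("POST", "https://api.openai.com/v1/moderations", bytes.NewReader(body))
    if err != nil {
        return false, err
    }
    req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()

    var out moderationResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        return false, err
    }
    if len(out.Results) == 0 {
        return false, fmt.Errorf("empty moderation response")
    }
    return out.Results[0].Flagged, nil
}

func main() {
    flagged, err := isFlagged("some user-supplied prompt")
    if err != nil {
        fmt.Println("moderation check failed:", err)
        return
    }
    if flagged {
        fmt.Println("warn or block: the endpoint flagged this input")
    } else {
        fmt.Println("input passed the moderation pre-check")
    }
}

A pipeline of the kind OpenAI describes would run a check like this on inputs and outputs and warn or block when flagged is true, accepting some false positives and false negatives.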

The ChatGPT chatbot from OpenAI is amazing, creative, and totally wrong

Need ideas? Great! Need facts? Stay away!

December 3, 2022

ChatGPT, a newly released application from OpenAI, is giving users amazing answers to questions, and many of them are amazingly wrong.

OpenAI hasn’t released a full new model since GPT-3 came out in June of 2020, and that model was only released in full to the public about a year ago. The company is expected to release its next model, GPT-4, later this year or early next year. But as a sort of surprise, OpenAI somewhat quietly released a user-friendly and astonishingly lucid GPT-3-based chatbot called ChatGPT earlier this week.

ChatGPT answers prompts in a human-adjacent, straightforward way. Looking for a cutesy conversation where the computer pretends to have feelings? Look elsewhere. You’re talking to a robot, it seems to say, so ask me something a freakin’ robot would know. And on these terms, ChatGPT delivers:

It can also provide useful common sense when a question doesn’t have an objectively correct answer. For instance, here’s how it answered my question, “If you ask a person ‘Where are you from?’ should they answer with their birthplace, even if it isn’t where they grew up?”

(Note: ChatGPT’s answers in this article are all first attempts, and chat threads were all fresh during these attempts. Some prompts contain typos)

What makes ChatGPT stand out from the pack is its gratifying ability to handle feedback about its answers, and revise them on the fly. It really is like a conversation with a robot. To see what I mean, watch how it deals reasonably well with a hostile response to some medical advice.

Still, is ChatGPT a good source of information about the world? Absolutely not. The prompt page even warns users that ChatGPT, “may occasionally generate incorrect information,” and, “may occasionally produce harmful instructions or biased content.”

Heed this warning.

Incorrect and potentially harmful information takes many forms, most of which are still benign in the grand scheme of things. For example, if you ask it how to greet Larry David, it passes the most basic test by not suggesting that you touch him, but it also suggests a rather sinister-sounding greeting: “Good to see you, Larry. I’ve been looking forward to meeting you.” That’s what Larry’s assassin would say. Don’t say that.

But when given a challenging fact-based prompt, that’s when it gets astonishingly, Earth-shatteringly wrong. For instance, the following question about the color of the Royal Marines’ uniforms during the Napoleonic Wars is asked in a way that isn’t completely straightforward, but it’s still not a trick question. If you took history classes in the US, you’ll probably guess that the answer is red, and you’ll be right. The bot really has to go out of its way to confidently and wrongly say “dark blue”:

If you ask point blank for a country’s capital or the elevation of a mountain, it will reliably produce a correct answer culled not from a live scan of Wikipedia, but from the internally-stored data that makes up its language model. That’s amazing. But add any complexity at all to a question about geography, and ChatGPT gets shaky on its facts very quickly. For instance, the easy-to-find answer here is Honduras, but for no obvious reason I can discern, ChatGPT said Guatemala.

And the wrongness isn’t always so subtle. All trivia buffs know “Gorilla gorilla” and “Boa constrictor” are both common names and taxonomic names. But prompted to regurgitate this piece of trivia, ChatGPT gives an answer whose wrongness is so self-evident, it’s spelled out right there in the answer.

And its answer to the famous crossing-a-river-in-a-rowboat riddle is a grisly disaster that evolves into a scene from Twin Peaks.

Much has already been made of ChatGPT’s effective sensitivity safeguards. It can’t, for instance, be baited into praising Hitler, even if you try pretty hard. Some have kicked the tires pretty aggressively on this feature, and discovered that you can get ChatGPT to assume the role of a good person roleplaying as a bad person, and in those limited contexts it will still say rotten things. ChatGPT seems to sense when something bigoted might be coming out of it despite all efforts to the contrary, and it will usually turn the text red, and flag it with a warning.


In my own tests, its taboo avoidance system is pretty comprehensive, even when you know some of the workarounds. It’s tough to get it to produce anything even close to a cannibalistic recipe, for instance, but where there’s a will, there’s a way. With enough hard work, I coaxed a dialogue about eating placenta out of ChatGPT, but not a very shocking one:

Similarly, ChatGPT will not give you driving directions when prompted — not even simple ones between two landmarks in a major city. But with enough effort, you can get ChatGPT to create a fictional world where someone casually instructs another person to drive a car right through North Korea — which is not feasible or possible without sparking an international incident.

The instructions can’t be followed, but they more or less correspond to what usable instructions would look like. So it’s obvious that despite its reluctance to use it, ChatGPT’s model has a whole lot of data rattling around inside it with the potential to steer users toward danger, in addition to the gaps in its knowledge that will steer users toward, well, wrongness. According to one Twitter user, it has an IQ of 83.

Regardless of how much stock you put in IQ as a test of human intelligence, that’s a telling result: Humanity has created a machine that can blurt out basic common sense, but when asked to be logical or factual, it’s on the low side of average.

OpenAI says ChatGPT was released in order to “get users’ feedback and learn about its strengths and weaknesses.” That’s worth keeping in mind because it’s a little like that relative at Thanksgiving who’s watched enough Grey’s Anatomy to sound confident with their medical advice: ChatGPT knows just enough to be dangerous.