OMG GPT-2 to GPT-4: Unleashed Above 90% of Human Intelligence

©1998 *Educational CyberPlayGround®
https://edu-cyberpg.com

Zach Vorhies, Google whistleblower (via James O’Keefe), disclosed Google’s “Machine Learning Fairness,” the AI system that censors and controls your access to information.

My god, this paper by that OpenAI engineer is terrifying.
Everything is about to change.
AI superintelligence by 2027.

SITUATIONAL AWARENESS:
The Decade Ahead

@Perpetualmaniac · Jun 13

Please follow @leopoldasch

It’s 140+ pages. I’m going to post the video analysis. Get a drink. You are going to need it.
https://situational-awareness.ai/

They are going to summon a god. And we can’t do anything to stop it. Because if we do, the power will slip into the hands of the CCP.
“Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans.”
“Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might.”
“We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.”

“Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.”

“AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.”

“Reliably controlling AI systems much smarter than we are is an unsolved technical problem. And while it is a solvable problem, things could easily go off the rails during a rapid intelligence explosion. Managing this will be extremely tense; failure could easily be catastrophic.”

“Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way?”

“As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on.”
“I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”

“The pace of deep learning progress in the last decade has simply been extraordinary. A mere decade ago it was revolutionary for a deep learning system to identify simple images. Today, we keep trying to come up with novel, ever harder tests, and yet each new benchmark is quickly cracked. It used to take decades to crack widely-used benchmarks; now it feels like mere months.”

“We’re literally running out of benchmarks. As an anecdote, my friends Dan and Collin made a benchmark called MMLU a few years ago, in 2020. They hoped to finally make a benchmark that would stand the test of time, equivalent to all the hardest exams we give high school and college students. Just three years later, it’s basically solved: models like GPT-4 and Gemini get ~90%.”

“To see just how big of a deal algorithmic progress can be, consider the following illustration of the drop in price to attain ~50% accuracy on the MATH benchmark (high school competition math) over just two years… efficiency improved by nearly 1,000x—in less than two years.”
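Taking the quote at face value, the implied yearly rate is simple arithmetic. Only the ~1,000x and two-year figures come from the quote; the steady-rate assumption is mine:

```python
import math

total_gain = 1000.0  # ~1,000x efficiency gain on MATH (from the quote)
years = 2.0          # achieved over roughly two years (from the quote)

ooms_total = math.log10(total_gain)  # orders of magnitude (OOMs) overall
ooms_per_year = ooms_total / years   # assuming a steady rate
gain_per_year = 10 ** ooms_per_year  # multiplicative gain per year

print(f"{ooms_total:.1f} OOMs total, {ooms_per_year:.2f} OOM/year, ~{gain_per_year:.0f}x per year")
```

So ~1,000x in two years works out to roughly 1.5 OOMs, or about 32x, of algorithmic efficiency per year, if the rate were constant.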

“In this piece, I’ll separate out two kinds of algorithmic progress. Here, I’ll start by covering “within-paradigm” algorithmic improvements—those that simply result in better base models, and that straightforwardly act as compute efficiencies or compute multipliers. For example, a better algorithm might allow us to achieve the same performance but with 10x less training compute. In turn, that would act as a 10x (1 OOM) increase in effective compute. (Later, I’ll cover “unhobbling,” which you can think of as “paradigm-expanding/application-expanding” algorithmic progress that unlocks capabilities of base models.)

If we step back and look at the long-term trends, we seem to find new algorithmic improvements at a fairly consistent rate. Individual discoveries seem random, and at every turn, there seem insurmountable obstacles—but the long-run trendline is predictable, a straight line on a graph. Trust the trendline.”
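The “straight line on a graph” is a straight line on a log scale: OOMs of effective compute versus year. A minimal sketch of that kind of trendline extrapolation, using entirely hypothetical data points:

```python
# Fit a straight line to log-scale "effective compute" and extrapolate.
# The data points below are hypothetical, purely to illustrate the method.
years = [2018, 2020, 2022, 2024]
log_compute = [0.0, 1.4, 2.9, 4.5]  # hypothetical OOMs of effective compute

# Least-squares slope and intercept (OOMs gained per year)
n = len(years)
mean_x = sum(years) / n
mean_y = sum(log_compute) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, log_compute)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Extrapolate the trendline to 2027
pred_2027 = slope * 2027 + intercept
print(f"~{slope:.2f} OOM/year; projected {pred_2027:.1f} OOMs by 2027")
```

The point of the exercise: if the log-scale points really do sit on a line, extending that line is all the forecast requires.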

The AI models can vastly improve just by “unhobbling,” i.e., unlocking capabilities already latent in the model.
“A survey by Epoch AI of some of these techniques, like scaffolding, tool use, and so on, finds that techniques like this can typically result in effective compute gains of 5-30x on many benchmarks.

METR (an organization that evaluates models) similarly found very large performance improvements on their set of agentic tasks, via unhobbling from the same GPT-4 base model: from 5% with just the base model, to 20% with GPT-4 as posttrained on release, to nearly 40% today from better posttraining, tools, and agent scaffolding.”

We aren’t riding just one exponential driver of AI progress, but at least three, all compounding together.
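In the essay’s framing, those drivers are physical compute scale-up, within-paradigm algorithmic efficiency, and unhobbling, and they compound multiplicatively. A sketch with placeholder per-year multipliers (the numbers are illustrative, not the essay’s):

```python
import math

# Hypothetical per-year multipliers for the three drivers; treat these
# as placeholders, not figures from the essay.
compute_per_year = 3.0     # physical compute scale-up
algo_eff_per_year = 3.0    # within-paradigm algorithmic efficiency
unhobbling_per_year = 2.0  # scaffolding, tools, post-training

# Multipliers compound: the combined gain is their product.
combined = compute_per_year * algo_eff_per_year * unhobbling_per_year
ooms_per_year = math.log10(combined)
print(f"~{combined:.0f}x effective-compute gain/year = {ooms_per_year:.2f} OOMs/year")
```

Even modest individual multipliers, stacked this way, add whole orders of magnitude in a few years.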

https://x.com/Perpetualmaniac/status/1801637823818203592/photo/1

“By 2027, rather than a chatbot, you’re going to have something that looks more like an agent, like a coworker.”

AI Super Intelligence:

“We don’t need to automate everything—just AI research”

“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”
https://x.com/Perpetualmaniac/status/1801640763941343379/photo/1

“We’d be able to run millions of copies (and soon at 10x+ human speed) of the automated AI researchers.”

“…given inference fleets in 2027, we should be able to generate an entire internet’s worth of tokens, every single day.”
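A rough sanity check of that claim; every number below is an illustrative assumption, not a figure from the essay:

```python
# Back-of-the-envelope: could a 2027 inference fleet emit an internet's
# worth of tokens per day? All inputs are illustrative assumptions.
internet_tokens = 30e12          # assumed size of a web-scale text corpus
gpus = 1_000_000                 # assumed inference fleet size
tokens_per_gpu_per_sec = 1_000   # assumed batched inference throughput

tokens_per_day = gpus * tokens_per_gpu_per_sec * 86_400  # seconds per day
internets_per_day = tokens_per_day / internet_tokens
print(f"{tokens_per_day:.2e} tokens/day ≈ {internets_per_day:.1f} internets/day")
```

Under these placeholder numbers the fleet produces a few internets’ worth of tokens daily; the claim stands or falls on the assumed fleet size and throughput.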

WATCH

I’m sorry – I wasn’t pessimistic enough about the AI situation – the numbers are WAY worse than I imagined.
https://x.com/Perpetualmaniac/status/1801662699312435212

Video Analysis of

“SITUATIONAL AWARENESS:
The Decade Ahead”

https://www.youtube.com/watch?v=om5KAKSSpNg

Everything is going to change.

We are NOT going back to normal.

▓▓▓—▓▓▓—▓▓▓—▓▓▓—▓▓▓—▓▓▓
