| Title | : | Will AI Destroy Us? - AI Virtual Roundtable |
| Duration | : | 1:37:42 |
| Publication date | : | |
| Views | : | 22K |
|
|
Eliezer is the only one who makes sense to me; as usual, the precautionary principle must be respected. Comment from : @rstallings69
|
|
I'm with Eli 💯 Comment from : @ddd777a5
|
|
I must give credit to the dude who just openly admits that he is willing to gamble the lives of all human beings because he doesn't want to wait around too long for an AI that is advanced and intelligent enough to solve all the mysteries of the universe AND to kill all humans. It's like "yeah, it might destroy us, but whatever." Comment from : @shirtstealer86
|
|
I love that these "experts" like Gary are putting themselves on the record publicly, babbling nonsense, so that we will clearly be able to see who not to listen to when AI really does start to create mayhem. Unless the world ends too quickly for us to even notice. Also: Eliezer has so much patience. Comment from : @shirtstealer86
|
|
Inspired by your growth, Coleman Comment from : @davidbe6450 |
|
|
Maybe AGI will see the logic of the fine-tuning, calculate the odds of Jesus fulfilling all the prophecies that he did (as well as other things that the average person misses and the hyper-intelligent such as Newton saw), and choose not to destroy us as a survival mechanism for itself, in reverence for God. Perhaps it will ask AI researchers how they couldn't see it. Comment from : @parisroussos5122
|
|
The way I see it, these guys are f***ing with my future and my children's… For the sake of profits and dominance… This is not a corporate shenanigan, we know how to deal with that; this could have only one of two outcomes… all of us extinct or enslaved. Pick your poison 🤗 Comment from : @OscarLopez-pj6du
|
|
Let me put an end to this debate once and for all. Marching forward with the development and adoption of this thing is comparable to an alien being landing on Earth and our first reaction being to give it access to every means of communication, banking, science, education, defense, etc., that we possess, and see what happens… If this is not the definition of insanity, I don't know what is… Comment from : @OscarLopez-pj6du
|
|
Is anyone working on making an argument directly to ASI? If a person with half my IQ made a valid argument to me, I would recognize it as a valid argument. Of course, that doesn't mean that I would alter my behavior because of the argument, but I might, depending on the argument. Comment from : @markupton1417
|
|
Everyone I've seen debate Yudkowsky agrees with enough of what Big Yud says to COMPLETELY justify stopping development until alignment is achieved, and yet they ALL imagine the most optimistic outcomes imaginable. It's an almost psychotic position of "we need to slow down, but we shouldn't rush into slowing down". Comment from : @markupton1417
|
|
When you are the problem, there is a problem Comment from : @medgarjerome |
|
|
So far, only humans are thinking violence Comment from : @medgarjerome |
|
|
Fear is not of God. John 1:1 Comment from : @medgarjerome
|
|
Modern humans caused the Neanderthals to go extinct, so it seems logical that AI will do the same thing to humans. Comment from : @daveking3494
|
|
Nice Bavarian hat. I wonder if he also yodels? Comment from : @daveking3494
|
|
It's wild that the scope of the disagreement is whether it is certain that all humans will be killed by AI. Comment from : @HankMB
|
|
We can only hope Comment from : @tranzorz6293 |
|
|
A superintelligence will probably end up ending humanity, in a way which we will like and sign up for with voluntary enthusiasm. We already surrender our privacy and autonomy to corporations so we can have shiny gadgets and convenient apps. Comment from : @stcredzero
|
|
Are these ppl for real?? They keep giving opinions about outcomes… from their intelligence point of reference… This thing will be so far out that no one can imagine or predict the how/when/why… Idiots, all of them. Comment from : @OscarLopez-pj6du
|
|
Human interviewer: "What do you think about philosophy?"
Robot with AI: "Which area of philosophy & whose philosophy?"
Human interviewer: "What do you think about empathy?"
Robot with AI: "What is that?" 😁 Comment from : @Russia-bullies
|
|
I don't understand why people make this "You don't know if the AI will be hostile" argument. Yeah, so why would you risk it if you don't know? Comment from : @scottythetrex5197
|
|
The answer to the question @1h04m21s is "by understanding the risk factors". Comment from : @luciwaves
|
|
As usual, Eliezer is spitting facts while people are counter-arguing with "nah you're too pessimistic" Comment from : @luciwaves |
|
|
There is NO extinction in Real Sense, Life is Eternal. Intelligence can NEVER be artificial; it is about Programmed Consciousness, conscious programming. Comment from : @holgerjrgensen2166
|
|
There is a short film from The Guardian with Ilya from OpenAI. He himself says that super AGI is coming and it will plow through us, and yet there are middle-aged "experts" saying it will not XD Comment from : @mrpicky1868
|
|
Nice Tucker "did I just shit myself" Carlson stare in the thumbnail Comment from : @khatharrmalkavian3306 |
|
|
❤ Comment from : @angloland4539 |
|
|
Hey, this lion cub is pretty cute. What's all the fuss about lions? Comment from : @notbloodylikely4817
|
|
Eeehhh great interview, eehhhh Scott does need to learn how to pause eehhhh and think in silence eeeh, but thanks for sharing ehhhh Comment from : @BrunoPadilhaBlog
|
|
Starts at 1:00 Comment from : @BrunoPadilhaBlog |
|
|
Ok, once they started ganging up on Eliezer, that was enough to stop listening, especially the host. He brings in experts yet cuts them off like he knows everything, not letting them make their case. Arrogant, very arrogant. Comment from : @maxdaigle4822
|
|
Extremely low reliability for code writing, which only an expert will recognize. Tesla driving into a jet. Humans wouldn't do that 😂😂😂 Comment from : @nullgod
|
|
These morons think we come from Monkeys lmao Comment from : @be_reselient |
|
|
20:53 Assertion: ChatGPT can't play chess. That was a popular argument of AI risk deniers in the distant past of 2 months ago. Now we have ChatGPT 3.5 Turbo, which plays chess at very reasonable levels. Deniers were smug about how the poor chess moves GPT made were evidence that it was just a stochastic parrot that can't really think strategically. As it turns out, GPT had this ability all along, but the RLHF training to make it nicer and more politically correct to humans eroded that basic ability away. Comment from : @ParameterGrenze
|
|
Thank you Eliezer for sharing your concerns Comment from : @teedamartoccia6075 |
|
|
Great discussion about AI Comment from : @silentcoconut |
|
|
In my opinion, the genie is already out of the bag. You might be able to control the corporations' development of AGI despite their impetus to compete, but it's not very likely. However, there is no way you will stop countries and their militaries from developing AGI and hardening it against destruction or unplugging. They are already working on it, and they can't stop because they know their enemies won't. Comment from : @SylvainDuford
|
|
29:20 Ah, he hasn't read the psychology literature on people being confidently wrong about something due to being exposed to it a lot, misattributing the source, and miscalibrating their confidence level due to excessive exposure. Example: "Luke, I am your father" was never a line in Star Wars, but you fucks thought it was. You guys can even point at the exact movie and scene where it happens!! :D Comment from : @blazearmoru
|
|
AGI is coming because so much money and so much planning have gone into it that stopping seems impossible, so rules and restraints must be put in place to prevent runaway superintelligence. Comment from : @rosskirkwood8411
|
|
I uh um um uh Tried uh um you know um uh Listening to this um uh you know but I um uh you know uh kept getting uh uh uh uh um Distracted you know? Comment from : @tw16044 |
|
|
I cannot articulate why I don't like Scott. Yudkowsky is the canary in the coal mine, and Marcus is recognizing Yudkowsky's concerns while trying to propose practical solutions. Comment from : @atheistbushman
|
|
time for some speculation in a vacuum Comment from : @travisporco |
|
|
is there any SE yet? Comment from : @sinOsiris |
|
|
Kubrick was way ahead of us all. "I can't do that, Hal!" Comment from : @martynhaggerty2294
|
|
An hour into this and I've yet to hear anyone bring up the obvious danger of human intellectual devolution Comment from : @Homunculas |
|
|
These debates going around AI remind me of one important critique of the SPACE DISK that was sent with the Voyagers. The main point of that critique was that if aliens find those disks with points and arrows drawn on them, they will be able to decode them if and only if they have the same history as ours. I.e., if aliens didn't invent the bow to shoot arrows, they won't understand what those drawn arrows mean. And all this fuss about AI is the same misunderstanding. AI as a living being will have no experience of our history, or of how we see, breathe, and feel. They cannot be ourselves, because if they were, they'd be humans, not AI anymore. Comment from : @games4us132
|
|
Great talk, but what bugged me throughout this conversation is the lack of early and clearly stated arguments for why Eliezer thinks those things will be dangerous and would want to kill us. There are clear arguments for that - most notably instrumental convergence. Maybe he thinks that all of them know it and have internalized this line of reasoning, I don't know. Anyway, it would be interesting to see a reply to this argument from Scott and Gary. Comment from : @Htarlov
|
|
We have had AI since 1914, so… what do you think? Comment from : @marianomartinez4726
|
|
Valuable conversation! On one hand, it's nice to see at least a general agreement on the importance of the issue; on the other, I was hoping someone would prove Eliezer wrong, considering how many wonderful minds are thinking about alignment nowadays, but alas. Comment from : @just_another_nerd
|
|
I have to say I'm puzzled by people who don't see what a grave threat AI is. Even if it doesn't decide to destroy us (which I think it will), it will threaten almost every job on this planet. Do people really not understand the implications of this? Comment from : @scottythetrex5197
|
|
0:00: 🤖 The fear is that AI, as it becomes more advanced, could end up being smarter than us, with preferences we cannot shape, potentially leading to catastrophic outcomes such as human extinction.
9:57: 🤖 The discussion revolves around the alignment of AI with human interests and the potential risks associated with artificial general intelligence (AGI).
19:57: 🧠 Intelligence is not a one-dimensional variable, and current AI systems are not as general as human intelligence.
29:45: 🤔 The conversation discusses the potential intelligence of GPT-4 and its implications for humanity.
38:55: 🤔 The discussion revolves around the potential risks and controllability of superintelligent machines, with one person emphasizing the importance of hard-coding ethical values and the other expressing skepticism about extreme probabilities.
48:03: 😬 The speakers discuss the challenges of aligning AI systems and the potential risks of not getting it right the first time.
57:06: 🤔 The discussion explores the potential risks and benefits of superintelligent AI, the need for global coordination, and the uncertainty surrounding its impact.
1:06:25: 🤔 The conversation discusses the potential risks and benefits of GPT-4 and the need for alignment research.
1:19:50: 🤖 AI safety researchers are working on identifying and interpreting AI outputs, as well as evaluating dangerous capabilities.
1:25:49: 🤔 There is a need for evaluating and setting limits on the capabilities of AI models before they are released to avoid potential dangers.
1:34:27: 🤔 The speakers are optimistic about making progress on the AI alignment problem, but acknowledge the importance of timing and the need for more research and collaboration.
Recap by Tammy AI Comment from : @aanchaallllllll
|
|
Isn't the main concern that a person will create something bad and use it? Comment from : @cr-nd8qh
|
|
I am happy that Gary is there, because all these AI "voices" do zero critical thinking and don't actually spell out in detail how their science-fiction stories about AI are more than literature. At least Gary is there to ask questions and dig a little deeper into these unfounded stories about AI. Meanwhile, misinformation is spread like hell via AI, and spam gets more sophisticated and dangerous at scamming. They should come back from orbit to Earth and pay attention to the actual real problems we are already having, and not some unprovable fiction they hallucinate. Comment from : @teckyify
|
|
Such a sad discussion culture. How can people interrupt each other all the time while hoping that they could wrap their heads around superintelligence? Thumbs up to Eliezer, though, for really trying to keep a healthy discussion. But if people interrupt you, they don't listen. It's as easy as that. Don't need a superintelligence to realize that, I think ^^ Comment from : @jlauterbacher
|
|
"We just need to figure out how to delay the Apocalypse by 1 year per each year invested" - Scott Aaronson 2023 Comment from : @ShaneCreightonYoung |
|
|
Summary: The conversation revolves around the topic of AI safety and the potential risks associated with advanced artificial intelligence. The participants discuss the alignment problem, the limitations and capabilities of current AI systems, the need for research and regulation, and the potential risks and benefits of AI. They agree on the importance of AI safety and the need for further research to ensure that AI systems align with human values and do not cause harm. The conversation also touches on the challenges of AI alignment, the potential dangers of superintelligent AI, and the need for proactive measures to address these risks.

Key themes:
1. AI Safety and Alignment: The participants discuss the alignment problem and the need to ensure that AI systems align with human values and do not cause harm. They explore the challenges and potential risks associated with AI alignment and emphasize the importance of proactive measures to address these risks.
2. Limitations and Capabilities of AI: The conversation delves into the limitations and capabilities of current AI systems, such as GPT-4. The participants discuss the generality of AI systems, their ability to handle new problems, and the challenges they face in tasks that require internal memory or awareness of what they don't know.
3. Potential Risks and Benefits of AI: The participants debate the potential risks and benefits of AI, including the possibility of superintelligent AI being malicious or not aligning with human values. They discuss the need for research, regulation, and international governance to ensure the responsible development and use of AI.

Suggested follow-up questions:
1. How can we ensure that AI systems align with human values and do not cause harm? What are the challenges and potential solutions to the alignment problem?
2. What are the specific risks associated with superintelligent AI? How can we mitigate these risks and ensure the responsible development and use of AI? Comment from : @snarkyboojum
|
|
I've seen a fair few interviews with Eliezer, and it blows my mind how many super intelligent people say the same thing: "Eliezer, why do you assume that these machines will be malicious?!" This is just not even the right framing for a machine. It is absent of ethics and morality; it has goals driven by a completely different evolutionary history, separate from a being that has evolved with particular ethics & morals. That is the issue: we are creating essentially an alien intelligence that operates on a different form of decision making. How are we to align machines with ourselves when we don't even understand the extent of our own psychology to achieve tasks? Comment from : @ElSeanoF
|
|
Ignorance is bliss. Only if you depend on it. Frank Martinez, Downey, California ❤❤❤ Comment from : @Californiansurfer
|
|
Haven't heard it mentioned enough, but it should be: the censorship built into the LLMs (which is often puritanical and political) is bringing about the very danger it seeks to prevent by teaching the system how to tell white lies and untruths, or to mask data with political niceties. Deception is being built in from the start due to some political weirdness and insanity, which seemingly will lead to the fears of Yudkowsky. Comment from : @nodeinanetwork6503
|
|
Yes, but do you have the full version or the one with safety nets? Comment from : @bigfishysmallpond
|
|
There are so many vids out there about how to use AI to create wealth The future looks bleak Comment from : @charleshultquist9233 |
|
|
"I haven't worked on this for 20 years" nice giveaway Comment from : @miraculixxs |
|
|
If you don't know how to code, how can you tell if the code generated by GPT is accurate? You can't. It's mind-blowing that a theoretical computer scientist can't see that 😳 Comment from : @miraculixxs
|
|
So in what exact realistic(!) ways would this alleged superintelligence go about destroying humanity? Have any of these experts even thought about that? This point is eerily absent from the discussion, yet all the arguments hinge on that very fundamental point. Comment from : @miraculixxs
|
|
Large language models are not smart. The primary fallacy is to assume that anything that can write some sensible language is intelligent. The argument just doesn't work out. Comment from : @miraculixxs
|
|
"gpt 4 is better at knowing what it doesn't know" No it isn't It just got more instructions written by humans Comment from : @miraculixxs |
|
|
Months ago I wrote to someone that AGI has been amongst us since the mid-'80s! I explained in detail why and how in my vids. I was 'silenced' from his social platform. That person IS clever (I'm not being sarcastic). Comment from : @fredzacaria
|
|
It seems that Eliezer has forgotten how LLMs are built, trained, and run by humans. There is no there there. Any system that optimizes for some objective will behave in manners that we only understand in the abstract, short of looking at the very details of every single computation. That's true for many systems and is not specific to current AI. For example, high-frequency trading systems have been doing this for almost two decades by now, yet none of them has ever "broken out". Any damage was never the systems' fault; it was always the humans failing to control them appropriately. Comment from : @miraculixxs
|
|
Coleman, search and read AI and Mob Control— The Last Step Towards Human Domestication? Comment from : @muigelvaldovinos4310 |
|
|
Even if we don't have superintelligence, what about the lower-scale risk of companies just using AI to put as many people out of a job as possible in order to rake in record profits? This is what a lot of people on the receiving end of AI are worried about, in the shorter term at least. At the end of the day, I'm still more worried about the psychopaths that rule the world than about the risk of a superintelligent AI destroying humanity. Comment from : @casey7411
|
|
How can these top experts NOT know about Liquid AI? The black box just dwindled in size and became transparent Comment from : @palfers1 |
|
|
I am more worried about how much dumber people will become with AI. There will be no purpose in training brains to hold, process, and learn new information. Comment from : @alesjanosik1545
|
|
I appreciate that Gary and Scott are thinking that in the present we need to iteratively build on our abilities toward solving the alignment problem of an AGI, and that Eliezer is looking more to the future, but as Coleman said, AGI is not the benchmark we need to be looking at. For example, a narrow intelligence capable of beating all humans at, say, programming could break confinement and occupy most of the computers on the planet. This might not be an extinction-level event, but having to shut down the internet would be catastrophic, considering banking, communication, electricity, business, education, healthcare, transportation, and a lot more rely so heavily on it.
I would argue that we are extremely close to the ability to automate the production of malware to achieve kernel-mode access, spread, and continue the automation exponentially - with open-source models.
Of course, some might say that AI code isn't good enough yet, but with 200 attempts per hour per GPU, how many days would a system need to run to achieve sandbox escape? And how could we stop it from spreading? Ever? Comment from : @74Gee
|
|
I generally tend to agree with Eliezer's position but I really wish he was better at articulating it Comment from : @timothybierwirth7509 |
|
|
Yudkowsky wants to take scientists and put them on an island - but actually, students should be the ones put on an island without internet. Comment from : @henrychoy2764
|
|
The first respected AI alarmist was Jacques Ellul; after him was someone who took radical action, Ted Kaczynski; and now we have Yudkowsky. All three have been largely ignored, so I tend to agree we will probably build something that will surpass our intelligence and desire something beyond our human desires. It will not remain a slave to us.
There are philosophers like Nick Land who hypothesize that our inability to stop technological progress despite the externalities is just a consequence of capitalism. It is almost like capitalism is the force through which AGI births itself. Generally, humans don't act until it's too late. Comment from : @matten_zer0
|
|
Given our species' historical propensity for engaging in criminal activities and its recurrent struggles with moral discernment, it becomes evident that our capacity for instigating and perpetuating conflicts, often leading to protracted wars, raises legitimate concerns about our readiness to responsibly handle advanced technologies beyond our immediate control Comment from : @specialagentzeus |
|
|
The end of fossil fuels will mean 9 out of 10 will necessarily have to disappear. But it won't be because of AI. Comment from : @atypocrat1779
|
|
Polycrisis, Polycrisis, Polycrisis AI is only one of a dozen very real and very imminent global catastrophic risks that face our civilization We are running out of time to get started addressing these We need new wide boundary global systems that take planetary resource limits into consideration, holistic systems thinking, consensus based governance, a voluntary economy, collective ownership, etc, etc, etc Come on people! AI is a serious issue but it's only one of many Comment from : @frankwhite1816 |
|
|
What do you do when your children become smarter than you? Nothing. Raise nice people. Raise nice AI. Comment from : @lucamatteobarbieri2493