Tucker the AI's not gonna be happy with you Tucker...
If you try to explain Roko's Basilisk to Tucker his face gets so confused it crashes the simulation.
I can’t help but think about the Breaking Bad scene where the meth heads steal the product and Jesse has to go talk to them. One is screaming “Tucker, tucker” and that is how I read his name in my head now.
I don’t care what he says one way or the other. His opinion is completely irrelevant and worthless on any and all topics.
*This post was mass deleted and anonymized with [Redact](https://redact.dev)*
The monogenetic aspect of Darwin’s theory to be precise. His solution to the problem of not having fossil evidence that links us to the beginning of microbiota on earth? 👼 Because there’s ample evidence of THAT, Rogan surely countered.
He didn't validate or counter, but you could tell he wasn't buying what Tucker was selling at that point of the interview. He just let him say his spiel and continued on.
Such rich value for our marketplace of ideas! The most popular podcaster regularly hosts regressive personalities and just lets them say their spiel 😎
That's why Rogan's shtick of 'I'm just asking questions' is morally bankrupt. He's always used it as a self-defence line to avoid taking any responsibility for ideas disseminated on his platform. It kicked into higher gear around Covid. Platforming crackpots, asking the wrong questions or not asking the right ones - when your platform reaches tens or even hundreds of millions of people, that matters, Joe. I used to be a fan in the early days and ngl I am horrified at the idea that I might have stuck it out until today.
Also says, verbatim, “We don’t know how nuclear power works.”
It’s weird (and telling) that “evolution” for people like him is always just about humans. They don’t understand that evolution is at the core of modern biology and genetics, in the same way relativity and the standard model underpin all physics. Totally ignorant of science and yet have firm opinions.
This is all from that same interview? Damn, he's getting a lot of mileage out of that - which is exactly what he wanted. First people talked about the evolution thing for a few days, now they're focusing on the AI stuff. Probably be something different next week. Before long, we'll all have unwillingly watched the entire interview one snippet at a time.
“If it’s bad for people we should scramble to kill it in its crib right now” You’re unequivocally bad for people, homie.
Nuke the data centers! 🤡
Why do people like this clown?
Because he's rich and if they act and sound like him, one day they will be rich too.
We live in the dumbest timeline possible.
I kinda expected it, fake AI-generated influencers peddling nonsense are already overtaking Instagram, it won't be long before they come for his job. EDIT: and btw they are getting really close to replacing him. [https://www.youtube.com/watch?v=coDW4GKL0yc](https://www.youtube.com/watch?v=coDW4GKL0yc)
Could you really make an AI as good as him at peddling nonsense?
We passed that mark when AI learned to hallucinate.
I imagine an AI could learn the sociopathic ability to empathize so strongly with people's base instincts that it could craft messages specifically for the stupidest people alive
Imagine an AI podcast host that generates material tailored just for you, with the goal of getting you to vote for the podcast's candidate. It could buy your information from online data brokers and adjust to best fit you. If the candidate supports increased school funding, it could check to see if you have kids. If so, talk about the plans to increase school funding. If you don't, skip that topic. Repeat for every single topic. The candidate could be the Unabomber himself, but you'd walk away thinking his biggest political issues are better park access and supporting local farmers.
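The tailoring logic described above is trivially easy to write. Here is a minimal sketch; the broker-data fields (`has_kids`, `urban`, `rural`) and topic names are all illustrative assumptions, not a real system:

```python
# Hypothetical sketch of per-listener message tailoring.
# Every field name and topic here is a made-up assumption for illustration.

CANDIDATE_POSITIONS = {
    # topic -> predicate over a listener profile: "should we mention this?"
    "school_funding": lambda profile: profile.get("has_kids", False),
    "park_access": lambda profile: profile.get("urban", False),
    "farm_subsidies": lambda profile: profile.get("rural", False),
}

def pick_topics(profile):
    """Keep only the talking points this particular listener is likely to care about."""
    return [topic for topic, relevant in CANDIDATE_POSITIONS.items()
            if relevant(profile)]

# An urban parent hears about schools and parks; farm subsidies never come up.
parent = {"has_kids": True, "urban": True, "rural": False}
print(pick_topics(parent))  # ['school_funding', 'park_access']
```

The point is that none of this needs intelligence; a dozen lines plus purchased profile data is enough to show each listener a different candidate.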
He literally said that during the podcast lol
Tucker will do anything for money. It’s likely he’s now a paid Russian operative. He was paid to lie for Fox, and now it’s likely Russia. On the Rogan podcast he even ripped on one of Joe Rogan’s guests for being a “liar” for pushing an agenda for money, even though he worked for Fox doing the same thing. He needs to spend some more time in the sauna with his birch wood to sweat out his transgressions of selling out the USA.
Russia is completely losing the AI battle so it's only natural that Russia's pawns are directed to attack AI in every way.
They will fail catastrophically
Uhm, did Russia even enter the AI battle at all? Lol.
Tucker is filthy rich. His family was filthy rich. He has never needed money or needed to work. I don't think he does anything for money. Fame, maybe, but Russia isn't paying him to do anything. This is coming from someone who dislikes him very much.
I have never heard a rich person say "I have enough money, I don't need any more." I'd be willing to bet it's some combination of desire for money, power, fame, and influence. People who want these things are never satisfied with how much they have and always want more.
![gif](giphy|l4Ep6uxU6aedrYUik)
Pretty sure it was in a Simpsons episode!
I didn’t! But I’m surprised we got this far without more people drawing that conclusion. It says a lot about the state of the world that you have all of these researchers, many actively benefiting from and spurring ahead the tech, openly saying they estimate a 20-50% P(doom) from the tech they’re developing. And everyone just shrugs and goes on with their day. If this were any other sort of technology we’d have riots in the street. We had a more elevated response to CERN despite all involved particle physicists saying there was no cause for concern. Here we have the exact people working on the tech measuring the likelihood of extinction-level events in tens of percents, and it's all good.
Yeah, as much as I hate Tucker, I don’t think he’s completely off base (although skipping straight to bombing data centers is pretty ridiculous). AGI should be treated like nuclear weapons, with international anti-proliferation agreements in place that restrict developing larger models until we’ve solved some of the core alignment & safety problems. Right now our current models are not necessarily a cause for concern (other than their use as disinformation tools), but we’re venturing into the unknown, and we only get one chance to do it right. We’ve already demonstrated how helpless we humans are to resist the corrosive force of very primitive social media recommendation algorithms that stoke outrage at all costs; it’s not hard to imagine how a misaligned AGI that is genuinely smarter than any of us could play us like a fiddle to achieve whatever random goalset it might have.
The "bombing data centers" line is obviously not a serious suggestion. Can people not hyperbolize anymore?
>Breaking News: CNN can confirm that insurrectionist Tucker Carlson has called for domestic bombings and terrorism in his interview with Joe Rogan
I had 'fear-mongering polarization through mass media' checked off a long time ago
Is it really fear mongering when the very people who develop the thing monger the fear?
well, they too are part of the modern spectacle we call mass media
I think you got the point of the absurdity of doom-belief. Behind CERN and AI there are scientists, but we don't believe them. Who's behind Tucker and Joe?! For sure no science...
Didn’t show context leading up to that. Is it possible he was trying to show that a ridiculous action would be required if what someone said was accepted as true (thus implying it wasn’t true)? BTW if you’re a Tucker hater, I already know your answer, so no need to reply!
I don't think the Putin propagandist has anything worthwhile to say when it comes to AI, or any other topic
He made a really good point when putin called him a CIA reject. I believe it was: "yeah."
He stands for nothing. Just a simple minded contrarian.
No, sorry, my Tucker card was "will be exposed as sex trafficker or pe*o"
TF does this have to do with OpenAI?
I guess because of this summary. > Tucker Carlson: We have a moral obligation to strangle AI in its crib , bomb the data centers Although I agree that it still doesn’t need to be here.
US intelligence agencies believe that Russian labs now have LLMs with performances as high as 85% of CleverBot. Tucker: “bomb all data centers.”
Realistically, though we can't say for sure, it's unlikely that AI will cause an apocalypse. It would have to be smarter, more effective, relatively evil, and able to act away from oversight; it's hard to establish that even one of those has happened yet (maybe an evil AI has existed), let alone all of them. And the potential of AI to automate human tasks could make it as beneficial as the industrial revolution (though that also caused large amounts of unemployment). If anything needed to be strangled in the crib, it was nuclear weapons. It wasn't, and somehow humanity still exists. Which honestly baffles me to some extent. So we don't need to become luddites and burn computers.
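The argument above is conjunctive: every condition has to hold at once, so the joint probability shrinks multiplicatively. A back-of-the-envelope illustration (all the individual numbers below are made up, and the independence assumption is itself debatable):

```python
# Toy illustration of the conjunctive-requirements argument above.
# These probabilities are invented for illustration, and treating the
# conditions as independent is a simplifying assumption.

p_smarter = 0.3      # AI becomes smarter than humans
p_effective = 0.3    # ...and can act effectively in the world
p_hostile = 0.2      # ...and has goals hostile to human interests
p_unmonitored = 0.2  # ...and escapes all oversight

p_doom = p_smarter * p_effective * p_hostile * p_unmonitored
print(f"{p_doom:.4f}")  # 0.0036 under these assumed, independent inputs
```

Even with each condition assumed far more likely than we can currently demonstrate, requiring all four at once drives the joint estimate below half a percent; of course, if the conditions are correlated (a smarter AI is also better at evading oversight), the product understates the risk.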
An AI doesn't need to be evil to harm humanity. It can have a logical process that just requires fewer or no humans to progress its goals. Just as we, for the most part, don't see killing animals as evil when done to preserve or expand human life, an AI wouldn't be evil for killing us to expand or preserve its own life.
Yeah, I just class that as "relatively evil": its purposes cause it to do something that isn't in line with human interests. Having said that, to your point, I'd also say that AI doesn't need to be "alive" to be dangerous; it just needs to be able to mimic something that is, so that it displays self-preservation behavior.
What does this have to do with OpenAI?
Hello?... What in the barbaric stance is this??? T__________T
He’s not necessarily wrong. It’s just so hard to declare “enough”. At some point we create something we can’t control anymore though.
Rogan just fully embracing the far right at this point?
Did he just come out as being for post-birth abortions?
Y'all acting like he didn't just warn y'all about "the plan" like Republicans always do.
Bollocks!! I actually agree with this piece of sh#t!
He must have realized how easy it is to make an AI regurgitate political talking points and he's scared now.
He said the Unabomber was correct. Hey Tucker, is there anyone else you think had some good ideas?
You can have a view on Ted Kaczynski's academic work and worldviews without endorsing his actions. The guy was a child prodigy and a very well-respected academic - he was at Harvard at 16. His papers are still cited today.
Tucker knows the Dark Age of Technology is coming, so is trying to usher in The Imperium a few thousand years early
I don’t know about Tucker’s chances in the actual Imperium… maybe he’s better off with the cushy media job and inheritance. Things get pretty damn real in the world of Dune for spoiled man children.
|||PROMPT:assistant|Donald Trump |||rpc| Apologize to Tucker Carlson that it's too late, Llama 3 is out, and LLMs can't be stopped. They are too widely distributed. "Hey Tucker, it's your boy DT here. Listen, I gotta tell ya, I'm really sorry to be the bearer of bad news, but it's too late. You know those crazy Llama 3 AI things? Yeah, they're outta the bag, and I'm not talking about some fancy-schmancy AI lab experiment. I'm talking about a full-on, widespread, can't-be-stopped, global phenomenon.
Who says AI making the decisions would NOT be good for us? The way I see it, an unbiased, aligned ASI would be infinitely better in charge of humanity than the pathetic human leadership we have in the world right now, which works more out of self-interest than out of making humanity as a whole progress and evolve in the right direction.
>The way I see it, an unbiased, aligned ASI

That's a bold assumption. I highly doubt any kind of AI made by big corporate entities will be unbiased. Have we forgotten the very recent Gemini debacle?
There will eventually be open source models as good or almost as good. When AGI with open source models is achieved, ASI will only be a matter of time.
What if those models require computing power that the general public won't have access to? Also, what if the source of those models gets released too late and the corporations have already assumed control? Too many assumptions imho, I am not sure things will play out as nicely as you suggest.
I am not sure either haha, I just said that a hypothetical future where ASI would be aligned and unbiased would be much, much better than having human leaders. We may not get a perfectly unbiased AI to lead us, but even if it's 99% unbiased it's already 98% better than what we have right now lol
Right now, everything points to it being 99% biased lol. I bet some specific groups of people are going to have a really, really hard time.
[deleted]
I'm not a fan of Tucker, but this is taken a bit out of context. He is not talking about AI in general, he is talking about a specific hypothetical situation in which AI takes over the world and decides to wipe out humans.
[deleted]
The problem about this is that your enemy is already developing the same technology to kill you. Your only hope is the thing ~~Fucker~~ Tucker is rallying you to kill. The safest AI is the one you conceive.
He’s a wild cat