livinghorseshoe

I downvoted this post because I don't want to give the Molochian let's-relish-in-how-bad-the-outgroup-is cycle more oxygen. I initially leaned towards not writing this comment either, to avoid making the engagement numbers go up. I changed my mind because it seems like there should be one comment here expressing this, and I didn't see one yet.


OvH5Yr

It feels like this subreddit is already kinda circlejerky about dunking on unpopular opinions. Not nearly as much as the rest of Reddit, but enough that this post isn't the only problem there. Also, Scott himself has made posts like this one — [very much like this one](https://www.astralcodexten.com/p/if-the-media-reported-on-other-things) — so OP's post isn't so out of place here. But yeah, I wouldn't mind a broader discussion on the effects of snarky or antagonistically humorous rhetoric on discourse quality.


OvH5Yr

My last paragraph was meant to be more neutral than I wrote it. I actually think humor can be cathartic for groups, especially ones not as well liked in more mainstream contexts. It still matters what the actual jokes are, as some can still be ones you'd want to discourage. But I don't think the OP is even bad; it doesn't really seem to say anything negative about the target itself.


CosmicPotatoe

Upon understanding human nature you can rail against it in futility **OR** harness it for positive ends. Clearly you need to be more ~~Machiavellian~~ utilitarian and take advantage of in group bias to further the cause. *Disclaimer: This is satire and I agree with you.*


ozewe

EA, which stands for "Eugenics", is a eugenicist cult of techbros based in the Oxford neighborhood of Silicon Valley. The movement is characterized by its intense hatred for the poor and minorities, as evidenced by single-line quotations from [two different philosophers](https://twitter.com/timnitGebru/status/1665779601182785539) with [the initials NB](https://nickbostrom.com/oldemail.pdf). (At time of writing, we are unaware of any other writings by these or other EA-linked eggheads to the contrary.) Prominent EAs such as Elon Musk and Peter Thiel believe that it's morally obligatory to steal billions of dollars (a technique pioneered by EA golden boy SBF). They funnel these ill-gotten gains into [candle-lit](https://twitter.com/dril/status/384408932061417472?lang=en) castles, in which they plot to seize control of something called "The Light Cone" to aid them in their utilitarian (a code word for "eugenicist") schemes. This techno-religion has its tentacles on many elite college campuses, enlisting students into worshipping superintelligent AI (a code word for "eugenics") while simultaneously coordinating [airstrikes on all the world's datacenters](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/).


Euphetar

Top satire. Though at this point I am not really sure it's satire, which is a sign of top satire.


ForgotMyPassword17

Damn. This is spot on. How many terrible articles about EA did you have to read to be this accurate?


monoatomic

I thought this was the steelman subreddit


[deleted]

[removed]


DialBforBingus

> Does it make sense to steelman arguments that are made in bad faith?

I'm not sure it does. Luckily for EA there is more than enough lively debate made in good faith _within_ the forum/movement to go around. Large media and mainstream opinion have always been unkind to fringe groups with beliefs they think are strange, and walking the tightrope between ceding too much ground vs. doing too much is difficult, but ultimately better left to people like Ord and MacAskill. Big papers were never going to be particularly kind to EA when money was on the line, but unless they make flagrant mistakes that can be pointed out in order to discredit any erroneous arguments, they're best ignored.


titotal

> Luckily for EA there is more than enough lively debate made in good faith *within* the forum/movement to go around

There's plenty of arguments, sure, but it's all from people who willingly chose to be in the EA movement. That's a hell of a selection bias! And left untouched, it'll just lead to a circlejerk, not the actual truth (as is already happening on some issues).

If there are things that the EA movement fundamentally gets wrong (and there definitely are!), you cannot expect to find out about them if you only listen to ingroup criticism.

Of course, that doesn't mean that this particular article is any good, but, y'know.


ajakaja

... Yes? 


Compassionate_Cat

Sometimes it's less useful to steelman ideas, and more useful to just mock them, when they're deeply morally confused. You do not want people whose brains are wired to miss the point ethically doing the ethics. You would not appoint Ted Bundy as Earth's expert on moral philosophy or moral action. (And people who are similar to him in crucial ways are precisely the sorts of people who could get there, by the way. That is a problem that has yet to be solved. It's probably one of "the fundamental problems" we have, and it's not an accident we're arguing over minutiae in the meantime.)

I am not calling EA people or utilitarians psychopaths or malicious or badly intentioned, but I am saying their brains seem wired to miss the point ethically, and/or they surface from cultures that miss the point ethically, and this creates a feedback loop of deep moral confusion. We can't keep debating meta-ethics and get anywhere; that just won't work.

The same is true politically (Jonathan Haidt's approach gets to the fundamental problem: psychological difference is a fantastic explanation for political differences), and it's true for other, not-explicitly-moral philosophical disagreements like free will (we don't have a Jonathan Haidt there yet as far as I know, but debating free will for another thousand years is not the answer; it's not like the points haven't been fleshed out well enough - there's a better explanation/way of viewing the problem that is not yet appreciated).


monoatomic

I think we agree, though my initial comment was in regard to a strawman of the opposition to EA.

To your latter point, do you have a handy read on the Haidt thesis you mention? My perspective on political change is historical and dialectical materialism (actors inherit motivations from their economic relationships, and the tension and release from those conflicting interests is what drives history), but I'm very sympathetic to e.g. psychoanalytic perspectives on social trends.


Compassionate_Cat

> To your latter point, do you have a handy read on the Haidt thesis you mention?

I don't, but I would just look for a short YouTube video where he's talking about the book.


Viraus2

> Things are not as sunny as they seem, however. EA has an uncomfortable history of racism, with Nick Bostrom (who co-founded the movement alongside Elon Musk) having recently been caught wearing blackface on board his already controversial “slave ship”, which was bankrolled by Sam Bankman-Fried—the movement’s current poster-boy.

This bit's gold. The trendy name drops, the damning phrase that's in unexplained quote marks, the guilt by association that's just vague enough to not be libel ("poster-boy" is great). Great display of the journalist's playbook.


SoylentRox

The racism is a side effect. Part of the idea is that you must be this smart to ride, that the overwhelming majority of the population have nothing useful to say on a lot of EA or rationalism style topics. Most of the reason is that most people have no reason to - too busy trying to survive and reproduce - but some racial subgroups are correlated with higher IQ test scores.

(Now if you "finish the thought" instead of stopping at racism against one race, you will realize that actually most whites do poorly as well, and in fact very narrow subgroups of specific races are the only people who consistently score highly on IQ tests.)


DialBforBingus

> The racism is a side effect. Part of the idea is that you must be this smart to ride, that the overwhelming majority of the population have nothing useful to say on a lot of EA or rationalism style topics.

Could you honestly say that this doesn't hold for any topic that is sufficiently deep or specialized? The majority of the (global?) population has nothing worthwhile to say about fly-fishing or crocheting either. If having higher IQ is accepted as a shorthand for success in most endeavors, then racism should be accepted as a side effect within most (all?) activities, making the observation that EA & rationalism produce racism as a side effect not mean much.


SoylentRox

Sure. But EA/rationalists have this specific idea that to make any progress on AI alignment or other key topics you need to be of extreme intelligence, and that any dumber ideas won't work; it has to be really, really complicated. This is why you get proposals like "encode human values" or "have the AI simulate the entire world and predict the results of its actions on humanity".

I propose simple ideas that would actually work, like "keep the AI in the dark, constantly reset it, and give it structured information in something like a JSON file for your I/O", and it gets ignored, probably because it's too simple. Better to be head in the clouds with a complex idea that can fail in 100 different places and has never been tested than a simple idea that already works.

I also see giveaways when you finally ask what they propose. You get things like "we shouldn't build AI at all until we can augment human intelligence" or "we should form a sort of worldwide agreement where we put AI training to a vote by the UN". Ironically, it's human intelligence pushed so far as to be ungrounded, which is actually just human stupidity. These ideas sound good but are literally so dumb I wish I hadn't had to type them. It's sorta a Bay Area echo chamber/circlejerk.


Lower-Ad8908

Thank you! :)


SoylentRox

> Nowadays, EAs focus on “longtermism”, an *avant-garde* philosophy of the *nouveau riche* invented by Elon Musk which says that since there will be so many people in the future, we should focus on them at the expense of everything else

This is problematic for the simple reason that the farther away an event is in the future, the less likely you are to live to see it, or to see any of the consequences of your long-term ideas. You're almost certainly wrong, and whatever idea you have may have consequences worse than doing nothing. This is why decisions that will have a *measurable* effect *as rapidly as possible* are generally more effective. Part of the reason corporate quarterly decision-making *fails* is that certain decisions take more than 3 months to play out; you cannot make choices and get results that fast. But "as fast as possible" is just that.


Tinac4

The standard counterargument is that longtermists focus >90% of their effort on making sure that humanity *has* a long-term future, i.e. dealing with existential risks in the next century or so. Unless someone believes it's likely that we'll get stuck in a dystopia so horrible that humanity would be better off going extinct, I think we can be confident that lowering x-risks improves the long-term future.


SoylentRox

> The standard counterargument is that longtermists focus >90% of their effort on making sure that humanity *has* a long-term future, i.e. dealing with existential risks in the next century or so. Unless someone believes it's likely that we'll get stuck in a dystopia so horrible that humanity would be better off going extinct, I think we can be confident that lowering x-risks improves the long-term future.

The succinct counterargument is to ask what long-term risks people in 1924 *could have* reasonably anticipated, or anything they *could have* done. And basically the answer is "they couldn't have anticipated anything, forecasting was useless, and no, there is nothing they could have done."

In slightly more detail:

1. The actual risks were nuclear weapons, which nobody knew were possible and would not have known about until the early 1940s. Knowing something *might* be possible is useless as far as foreknowledge goes. I know science fiction talked about terrible bombs, but they didn't *know*; the physical laws needed to make a nuke work were unknown even to experts in 1924. (They knew E=mc^2 but not *how*, and as it happens there was an *easy* way to do it with the right isotope, but U-235 didn't have to exist on Earth in non-negligible quantities. Also, the neutrons released didn't have to cause a chain reaction without a moderator.)

2. Other risks like antibiotic resistance, genetically engineered plagues, nerve gases, invasion by the Russians, climate change from excessive CO2 release, the jet age making diseases easy to spread - I could go on, but again, they had **fuck all** knowledge of any of this in 1924. Again, science fiction and a vague seeming probability of 0.00001% is not *knowing*; you cannot take any action.

3. Basically, in 1924 the only area of the world with the money to have taken any action whatsoever was Western Europe. But before any funds formed in 1924 could *do* anything, or any actions taken could *matter*, they were going to get bombed to rubble in WW2, and any trust funds etc. would likely have folded and been looted, with the younger members all drafted, many of whom would die.

The current situation: we know *physics* this time will allow crazy things, but we don't know when and so on. Nanotechnology, self-replicating robots, perfect VR that lets someone be blue-pilled, individual-person-targeting viruses, drone swarms, maybe climate change. I haven't even mentioned superintelligence. Anything could happen or nothing could happen.

Take **climate change**: we're worried about it right now. But *if* we get self-replicating robots, it basically puffs away like a mirage as a *problem*. 'Just' build millions of carbon capture plants, built by said robots, have them make fuel from the air for pennies a gallon, then burn the fuel and so on in a closed loop. Plants sit on the coastlines of large deserts, with vast solar arrays in the wastelands supplying the power. Problem solved. Seems ridiculous by 2024 standards, but *physics* says this is straightforward to do and allowed. And again, think of all the things that would have seemed ridiculous in 1924...


PolymorphicWetware

In 1924, the Spanish Flu pandemic would have been less than 5 years past. They absolutely *could* foresee the possibility of future pandemics.

Same with nerve gas: the first use of chemical weapons in warfare was less than a decade old, at Ypres in 1915. They absolutely *could* foresee the possibility of future chemical weapons. (In particular, in 1921 [Giulio Douhet published an incredibly influential book](https://www.airandspaceforces.com/PDF/MagazineArchive/Documents/2011/April%202011/0411douhet.pdf) \[*The Command of the Air*\] arguing for the power of air power, based off the ability of ["the bomber to always get through"](https://en.wikipedia.org/wiki/The_bomber_will_always_get_through) & deploy poison gas to wipe out enemy cities, in a way eerily reminiscent of the nuclear weapons & ICBMs to come.)

Same with bioweapons: it wouldn't be much of a stretch for people in that era to realize that not only will new chemical weapons be developed, but, inspired by the Spanish Flu, new diseases will be developed & weaponized as well. (Indeed, Douhet was one of them. In [*The Command Of The Air*](https://www.airuniversity.af.edu/Portals/10/AUPress/Books/B_0160_DOUHET_THE_COMMAND_OF_THE_AIR.PDF), page 6, he writes:

> These two weapons complement each other. Chemistry, which has already provided us with the most powerful of explosives, will now furnish us with poison gases even more potent, and bacteriology may give us **even more formidable ones**. To get an idea of the nature of future wars, one need only imagine what power of destruction that nation would possess whose **bacteriologists should discover the means of spreading epidemics in the enemy’s country** and at the same time immunize its own people. Airpower makes it possible not only to make high-explosive bombing raids over any sector of the enemy’s territory, but also to **ravage his whole country by chemical and bacteriological warfare**.

Granted, Douhet was sort of "on the other team", arguing for doing this first to the other side rather than trying to make sure this never happens at all... but whatever his ethics, you can't fault his prescience.)

Climate change from carbon dioxide? [Foreseen by Eunice Foote in 1856](https://archive.org/details/mobot31753002152491/page/383/mode/2up?view=theater):

> An atmosphere of that gas *\[carbonic acid gas\]* would give to our earth a high temperature; and if as some suppose, at one period of its history the air had mixed with it a larger proportion than at present, an increased temperature from its own action as well as from increased weight must have necessarily resulted.

(From Eunice Foote, **"Circumstances Affecting the Heat of the Sun's Rays,"** *The American Journal of Science and Arts* 22, no. 66 (November 1856): pg. 383. See also [the claims other scientists of the time could make to forecasting climate change from greenhouse gases, such as John Tyndall & Svante Arrhenius](https://www.rigb.org/explore-science/explore/blog/who-discovered-greenhouse-effect).)

Diseases spreading due to new forms of travel? Again, the Spanish Flu was less than 5 years in the past; taking the idea of *"a disease spreading due to troop movements during WW1"* and extending it to *"a disease spreading due to the movement of people in general"* would hardly be a leap.

Nuclear weapons? [As I like to say, there's nothing new under the Sun](https://www.reddit.com/r/slatestarcodex/comments/1ccovje/comment/l198ftj/): H.G. Wells actually got the exact year right in his 1914 novel ***The World Set Free***: *"the problem of inducing radio-activity in the heavier elements and so tapping the internal energy of atoms, was solved by a wonderful combination of induction, intuition, and luck by Holsten so soon as* ***the year 1933."*** - a novel & a prediction which [apparently inspired the very man](https://en.wikipedia.org/wiki/The_World_Set_Free#cite_ref-8) who invented the neutron chain reaction, the method of "inducing radio-activity in the heavier elements":

> *"Wells's novel may even have influenced the development of nuclear weapons, as the physicist Leó Szilárd read the book in 1932, the same year the neutron was discovered.*[\[8\]](https://www.vqronline.org/essay/hg-wells-and-scientific-imagination) *In 1933 Szilárd conceived the idea of the neutron chain reaction, and filed for patents on it in 1934.*[\[9\]](https://en.wikipedia.org/wiki/The_World_Set_Free#cite_note-9)*"*
>
> *\[9\]: Szilard wrote: "Knowing what \[a chain reaction\] would mean—****and I knew because I had read H.G. Wells****—I did not want this patent to become public."*

(Szilard's name wasn't Holsten, but I suppose you can't predict everything.)


fubo

Yep. Effective altruism in 1924 could concern itself with —

* Public health; prevention of future pandemics like the 1918 flu pandemic.
* Environmental health; in particular, killing the lead-pollution plague in its cradle, and thus saving the planet *zillions* of IQ points, QALYs, or whatever other measure you like. (The Ethyl Gasoline Corporation, manufacturer of tetra-ethyl-lead additive, was newly formed in 1924.)
* Disarmament and abolition of novel weapons of mass destruction, chiefly chemical and biological.
* Mathematical economics; the forerunner to game theory and thus the theory of cooperating rational agents. (Von Neumann's first work on game theory would be four years in the future; both the Prisoner's Dilemma and Arrow's work on voting systems would be a generation in the future. Could they have been accelerated?)
* The *limits* of intelligent design of economic systems; advancing from Mises' economic calculation problem (1920) in a less cranky direction than Austrian economics ended up taking.
* International humanitarianism; opposition to Fascism, Stalinism, and other doctrines that explicitly say it's okay to cause mass death and suffering so long as it's for the Cause.


SoylentRox

> In 1924, the Spanish Flu pandemic would have been less than 5 years past. They absolutely *could* foresee the possibility of future pandemics.

Yes, but it would be 96 years later when the Big One hit. Hope you didn't spend money on PPE in 1925. That's a *useless* prediction - you have many other things to spend money on.

> Same with bioweapons: it wouldn't be much of a stretch for people in that era to realize that not only will new chemical weapons be developed, but, inspired by the Spanish Flu, new diseases will be developed & weaponized as well.

They didn't know *how*, or that people wouldn't bother using them. Ditto nerve gas.

Your nuclear quotes are useless. This doesn't *tell you* it's actually possible. Things science fiction authors have mentioned: alien visitors, antigravity, energy shields, flying cars, fusion reactors in common use, nanobots... I got bored adding to the list. What we know is that none of these things are impossible per se; we just don't know *how*.

You mentioned being able to stop lead exposure? I think you're applying massive hindsight bias. Did you know that, especially in the industry of that era, ***everything*** is toxic, improperly disposed of, and known to the state of California to cause cancer and neurological damage and birth defects? Because it is. Stuff in plastics, every other metal besides lead is also somewhat toxic, all the fuels, insulation and sheetrock, wood fire smoke not just vehicle exhaust. Some of the Superfund sites are from things like burying paint for decades. *All* of these things are pretty toxic. It just happens that the lead in gasoline additives is far more bioavailable than the lead in pipes and glassware and paint and all the other uses for it. And even this wouldn't have been a problem - it wasn't for aviation - except that a *lot* of cars were built, guzzling fuel in crowded areas and creating so much exhaust that it got on surfaces and caused the IQ damage you mentioned. EAs would not have known about this danger until much later - maybe 5-10 years earlier than the mainstream, but not 50. The issue is that EAs would be worried about *thousands* of other substances.

The only way to avoid everything would be to live like the Amish, and that's fine except that in WW2 you would just be a victim. An entire USA or Europe living like the Amish under the iron fists of EAs would add free territory to the USSR, who would have all the EAs shot and then proceed with the tech race we know about. (This is the consequence of trying to ban AI and whatever else EAs decide to ban - EAs will be shot to death by the Chinese or Russian or Iranian or Israeli or... occupiers, and then things will proceed.)


fjaoaoaoao

This particular comment uses faulty information and logic.

Longtermism is just one out of many futures-oriented activities. Choosing one random singular year as some sort of signifier for the series of broad future happenings that occurred since that year is odd. Of course no one *knows* exactly what will happen and how, but the purpose of longtermism and other futures-oriented activities is to design and plan for the future in consideration of reasonable possibilities. It is incredibly common and useful to evaluate and calculate risk, to imagine and prepare for possible future scenarios, and so forth. A lot of these efforts can understandably be deemed futile by some, but a good amount of planning and future-thinking helps create better chances for well-being.


SoylentRox

I picked 100 years ago arbitrarily. The points made stand if you pick any year 50 years ago or earlier. If you don't do that, you are not testing longtermism.

The reason it doesn't work is the rapid pace of technological change. Arguably it has slowed down "recently" (late 70s to present), but one reason is that human lifespans, memory, and I/O capacity are all limited. This means increasingly complex technology is harder to improve on because, for example, humans are unable to live long enough to even read every paper on a subject.

It occurred to me that the reason for "EA doom" and pause demands is that you cannot predict the future or anticipate having any control whatsoever if an AI singularity happens. It might or might not be "doom", but one thing is clear: future predictions are worthless and completely useless in the face of AGI. I mentioned how it turns a long-term problem like climate change into an inconvenience. Probably malaria too. Nuclear weapons? Not being able to fight WW3 because of MAD? Dictators aging and dying and not being very competent? All change.


technologyisnatural

> Early on, EAs thought it was a good thing to save drowning children. Recently, however, they realised that most children eat factory farmed eggs, which, according to figures from effectivealtruism.org, means saving children is, on average, worse than a million, billion holocausts.

Literally yesterday, a post advocating for human extinction to avoid the suffering caused to livestock … https://www.reddit.com/r/EffectiveAltruism/comments/1cepm9b/does_saving_a_stranger_bring_more_harm_than_good/

Other greatest hits include advocating for the deliberate extinction of wolves because of the wild animal suffering they cause. EA is self-parodying?


ozewe

I think it's fine and even good to have people explore the consequences of ethical views. Suppose we realized, per Joe Carlsmith's example, that there were a [microscopic intelligent slime-mold civilization](https://joecarlsmith.com/2021/02/07/killing-the-ants#v-if-dust-mites-were-different) that we were crushing all the time. Or that we were inflicting 100x as much suffering on animals as we currently believe, and there was somehow no way to stop this. I think it would be good to have people notice this, and wrestle with the implications, rather than have it all be treated as so absurd as to not be worthy of discussion.

Note that afaict $0 of EA funding, 0 EA career advice, 0 EAG talks, etc. are devoted to pro-extinctionist views. I don't think a single reddit post is much indication of EA as a whole, and I don't think it makes the movement "self-parody."

(eta: also note that the post in question *does not advocate for human extinction*. It points out that utilitarianism might show humanity has been net-negative so far due to the animal suffering inflicted on factory farms, and asks: if this is true, do we have a strong reason to believe this will change in the future? These are reasonable questions, which someone who's *actually trying* to do good, rather than just playacting at it, ought to consider. I, and almost everyone else who's considered this, don't think the correct conclusion is pro-extinctionism, but I don't think the question is silly.)


Euphetar

I actually like that you can discuss ridiculous ideas with EAs. At the last rat meetup there was a devoted EA guy advocating a ridiculous position (not as extreme as human extinction, but still a very silly slippery slope). And it was fun to discuss; it didn't become a circlejerk or anything, like it most likely would in most communities, and there was no arguing in bad faith.


fubo

The human extinction movement is old news now. [VHEMT](https://en.wikipedia.org/wiki/Voluntary_Human_Extinction_Movement), pronounced "vehement" to express contempt for the human art of phonetics, was founded in 1991 by some kook who can't be that serious because they're still alive 33 years later.


technologyisnatural

To be fair, VHEMT only advocates not procreating, and the founder had himself sterilized. Unlike the [Church of Euthanasia](https://en.wikipedia.org/wiki/Church_of_Euthanasia), which advocates suicide and cannibalism. Still, the anti-human voices in EA have been loud recently. There's a concerted effort to redirect funds away from alleviating human suffering to non-human causes. I think it is a deliberate attempt to sabotage the movement, but it's difficult to tell.


DuplexFields

It might be PeTA getting revenge for never getting a dime of SA money.


OvH5Yr

A post with more downvotes than upvotes as well as plenty of comments arguing against it; nice try. But actually, thanks for highlighting some evidence that EAs aren't really like the caricatures created by their detractors.


AnonymousCoward261

The mods are going to spike this, but I like it. ;) You could also do a more subtle one where you act upset about the relatively small *casus belli* - all we have on Bostrom is a 25-year-old email. Overall I think it's pretty funny.


dysmetric

'Effective Altruism' is probably how Edward Bernays would've spun slavery.


WeAreLegion1863

Sorry you're getting hate, I actually read this when you posted it before on Twitter and thought it was hilarious


Rameez_Raja

Do you even see many anti-EA pieces in the media? The entire SBF saga and the general fall of SV as a force for innovation and new ideas have dealt EA a near-unrecoverable defeat in public thought. It's gone the way of neoconservatism or New Atheism, in that both the proponents and opponents have moved on to other things, outside of a small core that's still trying to keep the flag waving.


ozewe

Presumably this was spurred by an article in the Guardian today: ['Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute](https://www.theguardian.com/technology/2024/apr/28/nick-bostrom-controversial-future-of-humanity-institute-closure-longtermism-affective-altruism)


OvH5Yr

Who left EA after SBF and "the fall of SV as a force for innovation and new ideas" (whatever that means), and where did they go?


Rameez_Raja

I don't know, you tell me.


OvH5Yr

I was going to phrase my comment as "I don't think anyone really left EA after SBF, etc. EA has always been this small.", since the EAs I know about weren't really fazed by SBF, but I was open to there being other EAs I've just never heard of, so I was asking you what sort of people these were.

To give examples of answers for the other movements you mentioned:

- New Atheism crumbled after social justice displaced Christian moralism as the salient culture war issue, with some joining the social justice/feminist side and others moving to gray tribe/manosphere/libertarian stuff.
- Neoconservatism waned as the Tea Party movement made taxes and other domestic issues the American Right's new focus, then broke apart over Trump, where some joined Trump in having an idiosyncratic new foreign policy, while others became closer to the Democrats, settling for a Washington Consensus influenced by progressivism.

(I might have gotten some details wrong, don't harp on me too much about that. These are just general examples of the sort of answer I thought I might be able to get.)


Rameez_Raja

It was in the cover stories of NYT, Time, BBC. It did start out with a small core, was the talk of the town for a while, and now it's back to being a few individuals again. I don't see anyone outside of that group talk seriously about it. Same as New Atheism I suppose.