Her hatred of AI and Substack was very odd.
It was her condescending derision of his preference for Substack that I found the most irritating
Seemed kinda extreme
Even odder when you contrast it with all this Bluesky talk.
Everyone
Very curious to find out who falls under Sacha’s nazi umbrella.
It’s odd for someone as clearly intelligent and well read as Sacha to be so clearly blind to her own hypocrisy and the irony of that. First off, I have no skin in the AI game. If anything I’m obsessed with analog and tools from that time period. I read only hard-copy books, whatever. But to say that basically only people who have money invested in AI have anything good to say about it, while shamelessly peddling her Bluesky project, was ironic. To claim that Twitter, before Elon took over, did not selectively censor stuff is insane. Before Elon took over it was just as much of a dumpster fire as it is now, only on the opposite side of the spectrum. Everyone knows this. And then to suggest that Substack should selectively censor certain people with certain views. Like how is that irony lost on her?? She has the utmost confidence in her own views and won’t have them censored, while having the utmost confidence in what should be censored. History, a million times over, shows us that this is a slippery slope indeed. We’re all subject to our own biases, but it’s crazy how deeply biased, and how seemingly unaware of their own biases, some people are. This interview was the textbook definition of that. She had lots of interesting things to say, and lots of good ideas, but there were times when that irony was a bit too much to handle. And she was either unaware of it or too confident in her own views to care.
I also agree that she seems like a smart person, but sadly I had the same reaction you did. How is it lost on her? She's unaware of the way that her own ideology is motivating her arguments, like many on the left (and to be fair like many on the right also). Specifically, she's unaware of the moral assumptions she's making when she says things like, "hateful."
There's plenty of evidence of pre-Elon progressive bias on Twitter. Not only were the owners openly progressive, but many of their moderation rules were based on progressive ideology (like the rules on misgendering). Many people were kicked off unfairly for violating these sorts of rules, from high-profile people like Jordan Peterson to many lower-profile ones.
What's funny is that if there were a conservative social media platform that was kicking people off for, say, criticizing Jesus she would lose her mind. Hence my point above about ideology.
People like her are often wrong, but never in doubt
She was often wrong, but never in doubt
Josh, I expected more substantive pushback from you. That's why I subscribe to your paid podcast: for the sanity checks you provide when people say demonstrably wrong things.
I think he did try to push back but didn’t get much purchase because she was so self-assured. There’s not much you can do if the guest you’re talking to is living in a different reality
My exact thoughts - I guess Josh’s plan when this happens is to move on and trust that the listener figures out what’s going on 🙂
That's a good point
The critical issue people are missing about this conversation is Josh’s blatant misinformation about email folders. Sticking stuff in folders is still WAY better overall than relying on search to somehow find the thing you remember nothing about other than it exists. Fight me!
I have a rich set of Gmail tags that act as folders, which I set up ten years ago, but when I need to find an email I almost never use them because search is good enough
I also shamefully admit to using email folders!
When Josh was talking about his positive experience with early Twitter and Sacha said something like, "Would the experience have been different if you were a woman or person of color?" I was pretty sure I'd be able to predict her positions on almost everything else they talked about. Sadly, I was pretty much correct. It's not that lefties these days are bad people or insincere necessarily; they're just predictable and, consequently, boring.
Conveniently she left out the famous study showing that women experience less online harassment than men (and mostly from other women), so the logical answer to this question would have been: "If I were a woman, I probably would have been fine."
My take as well. All disagreements aside we might as well have been listening to a tape machine with somebody reading out a list of the top 20 left wing twitter memes from 2015. We've heard it a million times.
Can we please have a link to this research that definitively established that both sides were treated equally? Please, please, please... I'm fascinated (if it is true) by the methodology they used to do it.
This is an old article, but refers to a bunch of the studies I was referencing: https://thehill.com/opinion/technology/440703-evidence-contradicts-right-wing-narrative-of-tech-censorship-and-bias/
NYU REPORT: Please correct me if I looked at the wrong research, but I don't believe this research says what you think it says. Quote (timestamp approx. 1:05): "...there is research that shows that moderation decisions were affecting all ends of the political spectrum. That right-wing, alt-right, whatever labels you want to apply, voices were not being targeted and were not subject to moderation decisions at any greater rate than any other..." The NYU report states that the claim conservatives are making does not have evidence behind it, but it does not provide any study of moderation itself. The report suggests that conservative content is not being removed for ideological reasons and that searches are not being manipulated to favor liberal interests, but it does not provide evidence for that either. In one part it relies on engagement metrics: the report uses data from sources like CrowdTangle and NewsWhip to show that right-leaning Facebook pages and media outlets often have high levels of user engagement, suggesting they are not being suppressed. Engagement-metrics data is a very poor argument for "...voices were not being targeted and were not subject to moderation decisions at any greater rate than any other..." I think the conclusion is still the same: we don't know if one side was moderated more heavily than the other.
All the evidence presented in this Hill article (as far as I saw) comes directly from MediaMatters, which is a known far-left organization. To the extent that there is data, it should be taken with a grain of salt considering that an organization like that is an interested party with a clear stake in the idea that conservatives are wrong about this issue.
Sadly, this is precisely what people like Megyn Kelly do on the right when they selectively cite sources to back their dubious claims (as Kelly has done recently with Abrego Garcia).
To be fair, it doesn't mean Sacha is wrong per se, but nor does it mean Kelly is wrong. But it does mean additional sources are needed without such a strong bias. If anyone has those sources, please link them. Also I have to admit that I didn't look at EVERY link, so maybe I missed something. But all the linked evidence in the first half of the article links to MediaMatters.
And here are further subsequent studies: https://www.theguardian.com/media/2021/feb/01/facebook-youtube-twitter-anti-conservative-claims-baseless-report-finds
Ok this one is better because it's actually a report from NYU! I'm still skeptical based on what Nikita said above, but I'll take a deeper look.
Ok this is more of a polemic than a report. It spends half of the text bashing Trump and conservatives. And here's one of the key points that shows the unreliability of the conclusions drawn:
"... the right spreads more content that violates platform rules than the left. In light of this discrepancy, it stands to reason that right-leaning content would face labeling, demotion, or removal more frequently than left- leaning content."
The point is that some of THE PLATFORM RULES ARE THEMSELVES POLITICALLY BIASED TO THE LEFT! Or they were at that time anyway.
This report was also done in 2021, and it notes that much couldn't be assessed since at the time Twitter hadn't released all its data. But in 2022 the Twitter Files hit and the data was released, which showed uncontested examples of anti-conservative bias, including the shadow banning of Jay Bhattacharya as one of the higher-profile cases.
Swing and a miss.
This is the report that I was talking about. I'm trying to steelman the argument and find the best study/report. But this was the strongest of all those mentioned (I think).
The content-moderation algorithms are a separate issue from the direct government involvement in what is allowed and promoted. The Twitter Files showed it wasn't just algorithms but management involvement as well.
I was disappointed with the guest’s views on echo chambers, and her self-awareness on the possibility she might not be right on every culture issue felt ironically lacking, given her so-called bona fides. From transgender to race, she thinks she has it all figured out and won’t tolerate any dissent because apparently they’re all Alex Jones.
Yes, she was clearly the progressive-chauvinist in the room and got very impatient with Josh for daring to push back on the views within her echo chamber. For someone so smart it is sad to see such a paucity of self-awareness
Verifying information sources is nowhere near as simple as it sounds... The AP, as one example, is a "news" organization that has completely lost its way: extremely partisan, with poor-quality reporting and analysis.
Also, who still has faith in the "trust and safety" teams from the social media companies during Covid times? Also, isn't it true that Bluesky has become extremely toxic?
C'mon Josh, you ask great questions but sometimes you miss the obvious ones!
Oh man… webrings, Digg, 2007 Twitter- this made me so nostalgic for old school internet
Interesting thoughts on ChatGPT. It has 500 million users a day and they haven’t really monetised it yet.
Yeah, I guess they are trying to walk the tightrope between investing too little in product development (maximising short-term profit but risking falling behind their competitors in model development) and too much (incurring massive losses while staying ahead of the competition)
Not sure if this has been pointed out, but Re: the root/pervasiveness of litter trays in schools for students identifying as animals:
I believe there are schools in the USA that, as part of lockdown procedures for active shooters, keep a bucket of cat litter in each classroom. If the school was locked down for many hours, the people in the classrooms would have a makeshift toilet to use.
About training data for LLMs: there’s still a huge amount of non-public data that will find its way into training, whether we like it or not. Moreover, there’s strong evidence that synthetic data can continue to be generated and used. That’s how AlphaZero became so good and unpredictable at Go – not by training on past human games, but by generating billions of new games and learning what works as it continued to tweak the underlying parameters.
As well as data, there are investments in algorithmic improvements (as Josh mentioned), such as the reasoning models used by ChatGPT (o1, o3, etc.) and DeepSeek, and there’s an enormous investment in hardware and data centres that will still fuel exponential growth. Whether those turn out to be good investments, no-one knows, but it’s a certainty they’ll improve the capacity and resources available for future LLM training and usage.
Great content, agreements and laughter, disagreements and push back. Josh poked a few holes in some of Sacha’s views and she engaged and pushed back herself.
The ‘nazi funding’ disagreement is a perfect example of a very ‘Szeps’ uncomfortable conversation
When she laughed, it was out of disrespect for Josh's opinion. She was not interested in entertaining other points of view. Her opinion that anyone having a Substack account is intentionally funding Nazis is patently ridiculous.
Good chat