23 Comments

The core issue with Elon's ownership of Twitter is that he doesn't understand what the value of the platform is to most users. Its main use is for people to consume content: the vast majority of users tweet very little and are using it as a combination RSS feed and comment section. But he seems to think most people are on there to post, like some old-school forum. So you have him building features that mostly appeal to people who post a lot, and trying to charge money to the small minority of people who actually make the content.

Like, LeBron without Twitter is still LeBron, while Twitter without LeBron is GeoCities without the sense of graphic design. I'm now sort of curious whether, if Elon were in charge of Netflix, he would try to charge rightsholders to be on the platform.


This terrific article seems to be talking about two different topics--Twitter and AI--but in fact there is a problem both share: the people making the decisions are not like the rest of us, and, not only that, they lack awareness of how different they are from most people. So, Musk, as you note, has the false belief that he can scold and browbeat people into purchasing his product. He may be unaware of how other people operate because of his Asperger’s, or because he is surrounded by fanboys, but for whatever reason, he is making decisions based on a unique mindset, and he doesn’t realize it. This mismatch doesn’t bode well for Twitter’s bottom line. (Meta is a telling counterexample here too: Zuckerberg invested heavily in VR, presumably because he and nerds like him enjoy it. But now that he is seeing that regular people don’t like VR, he has had the sense to shift his company away from it.)

Similarly, as you note, the people who are freaking out about AI “have otherwise never thought deeply about public policy and who have no concept of how their prescriptions might intersect with broader political and economic realities.” The “rationalists” (I always laugh at how irrational the rationalist community can be) are unaware that policy decisions and consumer choice will mitigate the threat of any technology. In any case, rationalists aside, normal people are responding to AI exactly as we would predict, and nothing is particularly scary. They’re taking care of mundane tasks, goofing around, and enhancing their work. People aren’t plotting to exterminate humanity using AI. That particular fear exists in the heads of a small, very unusual group of people, who don’t realize how unusual they are.

I’m reminded of Socrates, who said that he knows that he knows nothing, which is more than most people know.


I'm a (very) casual reader of some "rationalist community" writers, and I have very mixed feelings about them as a group. I think some, like Scott Alexander and possibly Zvi Mowshowitz, do a good job of engaging with public policy and the way public policy decisions are incentivized and implemented.

Others... I don't know. I tried reading Yudkowsky's ~800-page book on rationalism, but I read only about 10 or 20 percent of it. Not only was I not impressed; I was also turned off from "rationalism" as he seemed to define or promote it. That's probably mostly because of the style and tone he adopts and not the merits. But style and tone matter at least a little, even if I put too much stress on them.

This AI thing does scare me, primarily because a lot of people a lot smarter than me (a group that includes Mr. Yudkowsky) seem to believe the danger is real. Smart people believing stupid and wrong things is not an unusual occurrence, and I hope this AI scare is an example of it. But still... I just don't know.


Being self-defeatingly rude is apparently Elon’s brand. I’m pretty close to the exact target audience for a Tesla (liberal-ish suburbanite who doesn’t take many long road trips, and likes cool things with environmental branding). But I also don’t want political comments on my car, because that would be annoying. So I will wait until another brand has a similarly cool electric car.


It should be understood that this "new" Elon is not some sort of revelation. He has always been a crass piece of personal garbage--the conceit was always that you could operate with and around the personal garbage to achieve something great.

I think the c. 2023 revelation is that the crass garbage isn't a side effect or whatever, but is the core product.


Similarly, I (mostly) enjoy what SpaceX does on a daily basis. His sophomoric nonsense is souring my enjoyment of the company and its achievements, despite the fact that he is not involved in the aspects of engineering that make it interesting to me.

At some point, either he needs a moment of self-awareness to see how his behavior is ruining the image of his associated brands, or those brands need to sink so far that he cannot ignore the consequences of his image upon their success. Sadly, the latter seems much more likely to happen.


To chime in... I work in the vehicle electrification space, and Tesla legitimately has had a tremendous and positive impact on the US EV market: the Supercharger network, the software innovations, and just designing EVs that people are excited about because they're cool and not the car equivalent of a hair shirt. It's incredible how he's destroying his reputation now. To Sharty's point, yes, he was always a jacka**, but his business was good! (Not perfect, but Tesla truly set a standard for other car companies to chase after.)


If you're interested in somebody spoiling the experience (sorry), it's not a Twitter effect--SpaceX (and I assume Tesla) also treats its people like garbage. "You didn't predict this new problem, ergo you're fired"; mandatory regular Saturday meetings; just a gigantic assbag.

Musk isn't nice or cute or far-seeing; he's an incompetent pile of garbage. Don't feel obligated to root for him or his pet projects.


I think there is a fallacy in the AI section. It seems to say:

1. Certain people are saying AI could kill us all

2. Their prescriptions to fix this are impractical and won’t work

3. Therefore AI won’t kill us all

It’s possible their fixes are wrongheaded but their underlying concerns are not.


Exactly.

1. Smug, annoying people who enjoy great prominence are issuing a dire warning about AI and advocating impractical solutions.

2. We can point out how impractical and silly their solutions are.

3. The problem (smug annoying people who enjoy great prominence) has been dealt with, at least for the moment. Oh, the relief!

But their actual claim/warning hasn’t been addressed.


AI people (of which I'm one) can warn about the direction in which AI is heading. It's up to policy people (like Josh) to think critically about policy solutions.

Yudkowsky might not have good advice about foreign policy, but he's trying to convey that the AI problem is very serious and very urgent. When he says that war with China would be preferable to complacency about AI, he's trying to communicate the magnitude of the issue.


Regarding Mastodon: I've certainly seen some of this HOA-like attitude, but ultimately it hasn't affected my experience. Also, it's being influenced by a lot of the Twitter expats. For example, there has previously been a bit of a culture against full-text searching; you can only search by hashtag. But there are some tools coming online to remedy that to a degree, and I think the lead developer of Mastodon has become more ambivalent about implementing it.

Substack Notes should implement the ActivityPub protocol; then it could integrate with Mastodon servers and with those of other networks that have the technology integrated. If the HOA-ers don't like it, they can block it. The rest of the network can follow those accounts if they like. I'm not sure if there are subscription-only features in SN, but that can be resolved (it's been done for the WordPress plugin).
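
For the curious, here is a minimal sketch of what that integration involves at the wire level. ActivityPub servers exchange ActivityStreams JSON objects; the actor URL, note text, and domain below are hypothetical, and real federation also requires WebFinger discovery and HTTP Signatures, which are omitted here.

```python
import json

# A rough sketch of the ActivityStreams JSON a service like Substack Notes
# would emit to federate a post. All URLs are made up for illustration.
note_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://notes.example.com/activities/1",
    "actor": "https://notes.example.com/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "id": "https://notes.example.com/notes/1",
        "attributedTo": "https://notes.example.com/users/alice",
        "content": "Hello from a federated note.",
    },
}

# Delivery is an HTTP POST of the signed activity to each follower
# server's inbox URL; here we just show the payload itself.
print(json.dumps(note_activity, indent=2))
```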


“I am sure that AI will pose problems... but this doomsday stuff just marks its proponents as bored dilettantes with no idea how policy or international relations work.

So I assume that in four years, this whole panic will be forgotten, just like how nobody talks about UBI anymore.”

Of course, your bets/predictions are, at least, somewhat aligned. Even if AI doesn't destroy us all in a nightmare sci-fi scenario, if it kills enough jobs (and it may very well), you'll be hearing quite a bit about UBI.


I work in applied ML (biotech), and I'm simultaneously very excited about these types of models and exasperated by how they seem to have tricked otherwise intelligent people into believing they represent anything more than a request-response loop. An LLM doesn't (and probably can't? Although I've been wrong before) have anything like a continuous conscious experience; it doesn't have "motivations" that would cause it to pursue any goals. The whole discussion about "AI alignment" presupposes some notion of general human values that we're supposed to align these tools to. That makes sense on the small scale (the model should do what I want it to do, to the best of its ability, although by the time you're writing precise machine-readable specifications you might as well just write the freaking code), but am I "aligned" with other humans who hold different ideological goals? Some humans even believe that there are too many humans!
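
To make the request-response point concrete, here is a minimal sketch of how a chat LLM client typically works, assuming a hypothetical HTTP endpoint and response shape (real APIs differ in details, but the statelessness is the same).

```python
import requests  # assumes the `requests` package is installed

API_URL = "https://llm.example.com/v1/chat"  # hypothetical endpoint

# The model holds no state between calls. Any appearance of memory or
# "motivation" comes from the client replaying the entire transcript on
# every request; delete this list and the "conversation" ceases to exist.
history = []

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = requests.post(API_URL, json={"messages": history})
    reply = response.json()["reply"]  # response shape is assumed
    history.append({"role": "assistant", "content": reply})
    return reply
```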

They’re incredibly useful for software development (the Noah Smith analogy about machine tools is pretty good, although I’d compare it more to CAD software as used by mechanical engineers), querying large preloaded databases (preloaded in the form of training), and writing silly stories. Some of those things are pretty cool!

Whenever anyone tries to go down this AI-apocalypse doom loop, I think it's helpful to replace "AI" with "a really smart person": could a really smart person figure out how to launch the nukes? Could they do complex gain-of-function research by themselves and create a plague? Like, maybe? But it's not clear to me that these LLMs will ever become "smarter" than a human; maybe they top out at human level. And then you're still limited by what humans can or cannot do.


The fear that AI could get out of control does not presuppose that it has anything like a conscious experience. The question of whether ChatGPT is conscious is not relevant. And if the AI is replaceable by "a really smart person," then it's not scary. AI gets scary when it is much smarter than a really smart person--the way Stockfish is much better at chess than even the best human players--and it acts in a way that is technically compliant with its code (which it must be) but which we failed to anticipate. I don't think either of these things is nearly implausible enough to dismiss out of hand.


This consciousness thing is such an annoying red herring. Everyone snickers, case closed. But an AGI does not need consciousness to be able to solve a wide variety of problems very quickly. In attempting to achieve the goals we give it, it will inevitably solve problems in ways we don't expect or like.


> So I assume that in four years, this whole panic [about AI] will be forgotten, just like how nobody talks about UBI anymore.

Big fan of the site, but this take is wildly wrong. Four years from now, AI will be an even bigger part of our lives than it is right now. The people who warned that AI would take people's jobs still think that's true, but they've grown increasingly worried about a world in which humans lose control of our collective destiny.

You might think that's absurd, but consider what the world would be like if all the lawyers were AIs and all the programmers were AIs. Imagine that all the best work in economics were done by humans using AI assistance, with people speculating that the AI could do the economics without the humans.

I work as an AI researcher, and I can tell you we're barreling toward that world. I would bet you at 5:1 odds that people will be panicking about this more four years from now than they are today.


I personally think (with the caveat that I know nothing about the technology) that people are worried about AI for the wrong reasons. I'm not worried about AIs "taking over" or other sci-fi scenarios. No, I'm worried about programmers relying completely on AI to write the code for things, such that no one actually has a handle on that code. Then, without any sci-fi stuff, you could have major software glitches in systems (pick whichever havoc-causing system you like: financial, air traffic control, whatever) that no one expected or knows how to fix, because they've relied on the AI to get it right. It doesn't require sentience, I, Robot scenarios, or anything that spectacular.


On AI, you aren't actually engaging with the arguments that those who are concerned have for their core beliefs. Yeah, the six-month moratorium is kind of silly and half-baked, and they don't seem to have very good other solutions beyond "we need more research." But the core problem (it seems to me) is well laid out, and sometimes problems genuinely don't have an easy (or even moderately difficult) solution. At the very least, it seems clear that there IS a path from where we are to apocalyptic AI on a relatively short timeline, which means it is important to have people working on it and trying to convince the rest of the world to pay attention.


Every Mastodon server has its own acceptable use policy, just like every other service on the internet. The difference is that each server can set its own policies, including which other servers it interacts with.

But! Mostly nobody cares. Join one of the big ones and you won't even notice.
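
As a purely illustrative sketch of the per-server policy model described above (this mirrors the idea, not Mastodon's actual internals; the domain names and function are made up):

```python
from urllib.parse import urlparse

# Each server maintains its own list of defederated domains; two servers
# with different lists will simply see different slices of the network.
BLOCKED_DOMAINS = {"spam.example", "harassment.example"}

def accept_activity(activity: dict) -> bool:
    """Reject inbound activities from domains this server has defederated."""
    actor_domain = urlparse(activity.get("actor", "")).hostname or ""
    return actor_domain not in BLOCKED_DOMAINS
```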


Out in meatspace, I lead a team that documents an engineering software suite. We specialize in describing to our customers how and why they should use specific features, and in what ways and combinations are appropriate.

Contra a couple of emails from higher management, I am not so worried about large-language-model artificial "intelligence" products. The bullshit they produce is very impressive, but it does not appear to be on a convergent track with knowledge or understanding.


Substack Notes: to each their own. One of the main reasons I don't like Twitter, and didn't like it from the beginning, seems to apply to Substack Notes. I just don't see the appeal of reading or writing 140-character comments (or however many characters are allowed). I guess Twitter has extra baggage from the last few years that makes it distinct from Substack Notes. But if a "note" walks like a duck and tweets like a duck, it's probably a tweet.


Re: Twitter, do you have any feelings about the fact that some of your audience discovered you there? Not in the sense of responsibility, just a feeling about the loss of something that has been valuable.

Re: AI, the pause seems like a request to let the big players mount their counterattack against the upstarts.
