New & upvoted


Posts tagged community

Quick takes

Cullen
I am not under any non-disparagement obligations to OpenAI. It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer. I have no further comments at this time.
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder: The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like “well, you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism/Pascal's mugging” critique.

But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book about living every human life in sequential order reminded me of this. I wish there were more people responding to the “longtermism is cold and heartless” critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but whom we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.

(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
Joseph Lemien
I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22. Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.
Are there currently any safety-conscious people on the OpenAI Board?
[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low-hanging fruit in just doing this for 30 minutes a day (I would do it, but my LTFF funding does not cover this). Someone should do this!


Recent discussion

Introduction:

I’ve written quite a few articles casting doubt on several aspects of the AI doom narrative. (I’ve started archiving them on my substack for easier sharing). This article is my first attempt to link them together to form a connected argument for why I find...

Vasco Grilo
Hi Erich. Note that humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.

Yes, that's true. Can you spell out for me what you think that implies in a little more detail?

Vasco Grilo
Great post, titotal! The link is broken.

In the past few weeks, I spoke with several people interested in EA and wondered: What do others recommend in this situation in terms of media to consume first (books, blog posts, podcasts)?

Isn't it time we had a comprehensive guide on which introductory EA books or media to recommend to different people, backed by data?

Such a resource could consider factors like background, interests, and learning preferences, ensuring the most impactful material is suggested for each individual. Wouldn’t this tailored approach make promoting EA among friends and acquaintances more effective and engaging?


Remember: EA institutions actively push talented people into the companies making the world-changing tech the public have said THEY DON'T WANT. This is where the next big EA PR crisis will come from (50%). Except this time it won’t just be the tech bubble.


Is this about the safety teams at capabilities labs?

If so, I consider it non-obvious whether pushing talented people into an AI safety role at, e.g., DeepMind is a bad thing. If you think it is, consider providing a more detailed argument and writing a top-level post explaining your view.

If, instead, this is about EA institutions pushing people into capabilities roles, consider naming concrete examples. For instance, 80k has a job ad for a prompt engineer role at Scale AI. That does not seem to be a very safety-focused role, and it is not clear how 80k intends to help prevent human extinction with that job ad.


A brief overview of recent OpenAI departures (Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Pavel Izmailov, William Saunders, Ryan Lowe, Cullen O'Keefe[1]). Will add other relevant media pieces below as I come across them.


Some quotes perhaps worth highlighting...

Larks
Kelsey suggests that OpenAI may be admitting defeat here: https://twitter.com/KelseyTuoc/status/1791691267941990764
Rebecca
What about for people who’ve already resigned?


 [memetic status: stating directly despite it being a clear consequence of core AI risk knowledge because many people have "but nature will survive us" antibodies to other classes of doom and misapply them here.]

Unfortunately, no.[1]

Technically, “Nature”, ...


I just wanted to say that the new aisafety.info website looks great! I have not looked at everything in detail, just clicked around a bit, but the articles seem of good quality to me.

I will probably mainly recommend aisafety.info as an introductory resource.

Owen Cotton-Barratt
I think this is a plausible consequence, but not a clear one. Many people put significant value on conservation. It is plausible that some version of this would survive in an AI which was somewhat misaligned (especially since conservation might be a reasonably simple goal to point towards), such that it would spend some fraction of its resources on preserving nature -- and one planet is a tiny fraction of the resources it could expect to end up with.

The most straightforward argument against this is that such an AI maybe wouldn't wipe out all humans. I tend to agree, and a good amount of my probability mass on "existential catastrophe from misaligned AI" does not involve human extinction. But I think there's some possible middle ground where an AI was not capable of reliably seizing power without driving humans extinct, but, if it allowed itself to do so, could wipe them out without eliminating nature (which would presumably pose much less of a threat to its ascendancy).

Pause AI was relatively small in scale. I feel like AI is in great need of protest: protesting for increased regulation and safety, layoff compensation, etc.

A lot of what EA wants in terms of AI can be protested for.

I feel like the EA community should protest more? What...


Thank you for your response. My impression is that, big or small, every individual's additional contribution to a protest is roughly proportional to the impact of the protest. This means that it's just as impactful for people to hold small-scale protests.

sammyboiz
Thank you for your response! Along with the rest of the EA community, I too am scared of doing activism for something controversial and bizarre like AI safety.

I. Introduction and a prima facie case

It seems to me that most (perhaps all) effective altruists believe that:

  1. The global economy’s current mode of allocating resources is suboptimal. (Otherwise, why would effective altruism be necessary?)
  2. Individuals and institutions can
...

Even more reason to think that transitioning to socialism is not tractable - some people will fight against it like hell!

mhendric
I am similarly unenthused about the weird geneticism. Insofar as somewhat more altruism in the economy is the aim, sure, why not! I'm not opposed to that, and you may think that e.g. giving pledges or Founders Pledge are already steps in that direction. But that seems different from what most people think of when you say socialism, which they associate with ownership of the means of production, or very heavy state interventionism and a planned economy! It feels a tiny bit motte-and-bailey-ish.

To give a bit of a hooray for the survey numbers: at the German unconference, I organized a fishbowl-style debate on economic systems. I was pretty much the only person defending a free market economy, with maybe 3-5 people silently supportive and a good 25 or so folks arguing for strong interventionism and socialism. I think this is pretty representative of the German EA community at least, so there may be country differences.